Creating a New Structure for Job Posts

Looking to improve your job board? Well, my group, consisting of Di, Luis, and Kevin from my Human-Computer Interaction class, decided to take a shot at the unsolved problem of how to display job descriptions. We wanted to create a format that minimizes the time a job seeker needs to decide whether or not to apply to a job. Currently, the structure of a job post is not standardized within or between job boards, as you can see below from Glassdoor, LinkedIn, Indeed, and Monster.

Our work:
Google drive folder that contains everything
   |   Design dump/rough documentation
   |   Spreadsheet of Qualtrics data
   |   Wireframes
   |   GitHub repository of scrapped JavaScript
   |   GitHub repository of the program that can process job descriptions

State of the world (of job boards, and initial thoughts)

One of the reasons job posts don't have a standardized format is that each company has its own way of writing them, and most job boards allow companies to post job descriptions in any format. Some job boards charge companies to post jobs, so the boards want to reduce the amount of work posting takes: the less friction there is, the fewer companies are discouraged from posting, and the more revenue the board collects. The board also generates more traffic this way, since job seekers can select from a wider pool of jobs. This addresses the concern of job posters who want to publicize their openings without too much effort (P1). Although this benefits job posters, job seekers have to deal with differences in how information is organized. That makes job posts harder to scan, since the information seekers are looking for, such as the requirements (S1) or the responsibilities (S2) of the position, isn't always in the same place on the screen. As a result, job seekers spend more time on each job post and don't get to apply to as many jobs as they could if they spent less time reading (S3).

Job posters also want to hire someone who is best suited for their open positions (P2). This may lead to a long list of requirements, but a long list will turn away many job seekers. As a result, job posters are left with few applicants to choose from, possibly not finding anyone they'd like to hire. How should job posters format their job descriptions so that they attract job seekers with the right amount of skill without scaring off or discouraging too many of them?

Summarized concerns

  • Job posters want to:
    • P1: publicize their job openings without too much effort
    • P2: get candidates who have the skills required for succeeding in these positions
  • Job seekers want to:
    • S1: find jobs that they qualify for
    • S2: find jobs that do work they’d want to do or companies whose missions align with the seeker’s own missions
    • S3: apply to as many jobs as possible if they are currently unemployed

I’ll refer back to these in my writing just to show the trade-offs we had to think about.

After some consideration about what UI aspect we should change to address the problem, we decided our goal was to standardize the subsections within a job post without creating too much additional friction for job posters in the form of extra forms or questions to answer. To reduce friction, our initial idea was to use Natural Language Processing (NLP) in some form to sort through the job description a poster submits and reformat it based on its content (e.g. move all the content about requirements to a different part of the description). Also, since some job posts don’t have much information, we were thinking of auto-generating some content based on what the poster put into the description. LinkedIn already does this for the personal summary, so it is possible.
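As a rough illustration of what that content-based reformatting could look like (a hypothetical sketch, not anything we built; the keyword lists are made up), each line of a description could be classified into a section by keyword and then regrouped:

```python
# Hypothetical sketch of the "reformat by content" idea: classify each
# line of a job description into a section by keyword, then regroup.
# The keyword lists are made up for illustration.
SECTION_KEYWORDS = {
    "requirements": {"required", "must", "degree", "experience"},
    "responsibilities": {"develop", "maintain", "design", "collaborate"},
}

def classify_line(line):
    """Guess which section a line belongs to from its words."""
    words = set(line.lower().split())
    for section, keywords in SECTION_KEYWORDS.items():
        if words & keywords:
            return section
    return "other"

def regroup(lines):
    """Group lines by section so similar content ends up together."""
    sections = {"requirements": [], "responsibilities": [], "other": []}
    for line in lines:
        sections[classify_line(line)].append(line)
    return sections
```

A real version would need NLP rather than keyword lists, but the regrouping step would look much the same.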


Standardizing all job posts in general is still a bit of a big task though, so we mainly focused on standardizing the format for software developer positions in the US and we assumed that the user was able to work in the US (e.g. doesn’t need a visa). Different fields have different jargon and kinds of requirements, so by focusing on software developer positions, we limited the amount of jargon we needed to account for in our prototypes. We also knew more about the kinds of requirements for software developers, such as knowing specific languages or frameworks, so it would be slightly easier to implement some kind of parsing.

So what did we do? We designed the UI/structure of job posts based on feedback from users on our designs and ideas, and we implemented a program that can detect programming languages in a job post on Monster or in a txt file. Our program ranks the detected languages on a scale of 0 to 2, with 0 being a language that is not required but nice to have and 2 being a language that is required and that the job seeker must know very well for the job. Although our program does not use NLP, I talk about future steps for integrating NLP into it to make it more robust.

Background Research

The first step to coming up with a direction for our solution was research. We looked at different websites and what they had to say about the job search process and also what people are looking for in jobs. Some websites we looked at include:

These websites show the kinds of problems users might have when reading job posts. One of the main issues was vague job requirements, especially for soft skills. Although a company can ask for good communication, it’s unclear what good communication consists of or how it will manifest in the job. However, companies often care about these soft skills precisely because soft skills aren’t necessarily teachable. Although people can be guided toward a soft skill by a mentor, they have to put in the effort to truly acquire it.

Example of learning a soft skill.
For example, people might give updates to their teammates because they’ve been told that good communication includes that. However, just giving updates is the bare minimum, and people can’t develop good communication skills if they don’t go beyond it. For example, they can ask for help, provide constructive feedback on someone else’s work, or let others know that they’re blocked until someone else finishes their part. It might be unclear to some people how to communicate in these ways: at what point do you ask for help, how do you make feedback not sound like a personal attack, how do you communicate your needs? If a person is not good at communication, they have to reflect on why their previous communications were not effective and how to change them to be more effective in the future. By doing this multiple times, they can truly develop good communication.

There was also vagueness in defining skills. For example, how do you compare years of experience to words like “familiar with” or “proficient”? Although these problems in defining and comparing requirements weren’t a major consideration when we first thought about our topic, they got us to reconsider how we should present or collect this kind of information. Job seekers need this information to decide whether or not to apply (S1), but job posters aren’t being clear about what they want because it is hard to define (P1). If companies don’t have well-defined job descriptions, how should we get better ones? We could ask companies to provide more information, but that is more work on their end (P1). Actually, companies are already asked to fill in additional information on LinkedIn about the skills and education they want from candidates. Still, it would be best not to create a system where companies are required to fill in extra information, so that the effort needed to use our system stays low.

Another website we looked at was Stack Overflow’s Developer Survey. We used content from its job priorities section to create a survey for people, mostly college students, to take. This survey was used to assess what kind of information people were looking for in a job post, which would guide the changes we might want to try out, such as putting the information people thought was more important at the top and the less important information at the bottom (S1, S2).

You can see the results we looked at when we made decisions based off of the responses at that time: https://docs.google.com/spreadsheets/d/1f5s8AlWCH62zPOTxDoK9YhPdYj7wjCa-7Yz7iKUfrC4/edit?usp=sharing

We also looked at current job boards and what was good and bad about their job descriptions, and eventually made a very rough wireframe.

Design – Wireframes and User Tests

Round 1

The idea behind our first wireframe was that users would be able to move around the sections to fit it to their preferences. For example, an experienced professional might want to learn more about the job responsibilities instead of the requirements since they might be looking for a specific kind of work (S2), whereas a recent graduate might want to look at the requirements instead in the hopes of finding any job (S1) (sad-face). The experienced professional could move the job responsibilities to the top of the page while the recent graduate could move the requirements to the top of the page instead.


We then created some wireframes to test the idea and see whether it made sense to users that they could drag the sections around (for our tests, users were mainly college students looking for jobs). Our control was an “optimized” LinkedIn job description. In the screenshots below, 2 and 3 are what we showed users, while 1 was shown afterward to see whether or not we should keep the gray bars in 2. Users were asked which one they liked better and what aspects of each they liked or didn’t like.

Our users were split between the modularized design and the optimized LinkedIn design. One common comment that stood out in our tests was that users typically didn’t understand what the gray bars (click-to-drag areas) were. A lot of users thought the bars were there to separate different blocks of information. It wasn’t clear to them that the gray area was draggable, partially because it’s hard to signify that something is draggable, but also because it’s not intuitive to be able to drag sections of a job post around. People usually just read job posts and that’s it. Our user testing got us to reconsider our redesign a bit, and we decided to scrap the idea of draggable sections. Here’s a transcript between my good friend Dogda and me that really summarizes what we got from our first user testing. He basically brought up all of the concerns other users also found with our modularized design.

Transcript, “first one” refers to our modularized design

Me: Let’s say you were looking at job posts. Which format/organization of information do you like better out of these two and why?

Dogda: why you askin

Me: school

Dogda: do your own homework :rage:
              but it’s obviously the first one

Me: what part of the first one do you like better

Dogda: all the sections are grouped by relevancy

Me: what do you think the gray bars are in the first one

Dogda: what do you mean
              they’re supposed to be different sections right?

Me: uh
        they’re supposed to be draggable areas
        we’re also checking to see whether it’s clear or not
        how we have it in the picture

Dogda: why would I was to drag things around on your ad

Me: have job postings be organized differently depending on your preferences

Dogda: Why would I do that on the fly
              that’s your job

Me: well there’s gonna be a default order
        if people want to change it, they can
        some people are good at filtering searches and some aren’t
        LOL

Dogda: Why don’t you just make them expandable sections
              and put the title on the bar

Me: too much work if users have to expand everytime they open a new post

Dogda: why don’t you just default to expanded

Me: i’m thinking some more
        we could make it save the expanded/not expanded settings i guess

“that’s your job”
Haha wow. We liked his idea of expandable/collapsible sections though, so we decided to incorporate it into some wireframes for our next tests. Users generally liked the structure of our modularized design, whereas they liked the optimized LinkedIn design because it was plain and simple with no extra features, such as the gray draggable bars. At this point, we had a mini panic attack (I was mostly laughing at their reactions though) about having to code up a website that implemented our design and then run user tests on it. Fortunately, we didn’t have to actually code up a whole website, so we made our prototype in Google Slides instead. Here’s a GitHub repository of some very rough code though. The JobPost1.html page just shows how the collapsible sections would work.

Round 2

We thought more about the display of information and got inspiration from another job board, RippleMatch, and from a Chinese job board called Lagou. RippleMatch inspired the idea of using icons to display information about required skills, while Lagou inspired us to add (more) tags about the job and its requirements to the top section.

We were also thinking of using color to help display information by making the color correspond to how necessary a skill is. This would address the ambiguity of the requirements: different people can spend different amounts of time with a language or framework and have the same knowledge, while two people spending the same amount of time can end up with different levels of knowledge. Indicating how necessary a skill is for the job (i.e., how well a candidate should know it) would help a user decide whether or not to apply. For example, if a job requires a lot of SQL and candidates should know SQL very well before applying, a user might not want to apply if they don’t know SQL. But if the job only requires SQL for a small portion of the work, the company might be okay with hiring people who don’t know it, so the user might consider applying. Using color would also make it faster to scan through job descriptions, since people wouldn’t have to read as much text and could get a good idea of the requirements just by glancing at a heatmap.
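To make the gradient idea concrete, here is a minimal sketch (hypothetical; the hex values are placeholders, not the colors from our actual wireframes) of mapping a 0–2 necessity level to a single-hue swatch:

```python
# Hypothetical mapping from 0-2 necessity levels to a single-hue
# gradient, where a stronger hue means the skill is more necessary.
# The hex colors are placeholders, not the ones in our wireframes.
LEVEL_COLORS = {
    0: "#ffe0b2",  # nice to have: pale
    1: "#ffb74d",  # used on the job: medium
    2: "#ef6c00",  # required, must know well: strong
}

def heatmap_color(level):
    """Return the swatch color for a skill's necessity level."""
    return LEVEL_COLORS[level]

def heatmap_html(skills):
    """Render {skill: level} as inline-styled colored spans."""
    return "".join(
        '<span style="background:%s">%s</span>' % (heatmap_color(level), skill)
        for skill, level in skills.items()
    )
```

One hue with varying strength keeps the legend trivial: stronger color, more required.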

Here are the wireframes we used for our second round of testing:

We got rid of the idea of using icons to describe hard skills as we made the wireframes. It was a bit hard to find good icons (and the icon format didn’t look too good either), but we were also questioning whether we even knew the icons for different languages ourselves. There was an idea that if someone didn’t know an icon, then they didn’t know the language well enough, but it’s possible there are professionals out there who don’t know what the icon for a specific language or library looks like. People don’t need to know the icon in order to use the language or library.


We still had some mixed results from our second user test though. For example, some users didn’t like our ordering and would rather have information about the job come before the job responsibilities. I think part of this was because some users had a different use case, where they also cared about the specific kind of work they’d be doing in whatever positions they applied for. Users were mostly okay with having expandable sections, although one didn’t even want that option available, most likely because he didn’t see a point in hiding parts of the job description.

As for our heatmap, reactions were mixed when it was one long column in the sidebar (it looked a bit intimidating), but in the other wireframes, users generally found it helpful as a visualization of the requirements. Someone might ask why we didn’t use different colors to visualize the requirements. The answer is that too many colors might be distracting, and how would users know what the colors mean? Different colors don’t carry much inherent meaning about skill level (unless you think of traffic light colors, but those are a bit distracting and can be hard to read). A color gradient better matches people’s idea of skill level: the strength of the hue corresponds to the skill level, with stronger hues meaning higher levels. We saw this in our user tests, where users were able to tell us what the different colors meant.

Users were especially mixed about the bolded format of our job description, especially as they looked at it longer and noticed that words that didn’t seem like they should be bolded, such as “possess”, were bolded. This was partially our fault for bolding too much, but users said that they would like the bolded format if it was toned down, since it guided them to those phrases. They also liked having the minimum and preferred requirements separated, since it made it easier to know whether or not they were qualified for the job.

Funny transcript when I asked my sister whether she liked the bolded version over the non-bolded version of the same job description; the first one is non-bolded, the second is bolded

Me: which one do you like better

Sister: BAHAHA IS THAG EVERN A QUEISFON
            THE ONE WITH Bded SHIT OFC
            BOLDED

Me: why do you like that one better?

Sister: I CAN ACTUALLY FOCUS ON WHAT I NEED TO POSSES
            like
            obvious programming skills
            but which one
            the othe rone is kinda blocky

Me: there’s not too much bolded text?

Sister: like everything seems way too seamless
            it is a little bit too much
            like
            left side
            is kinda tooooooo much
            kinda
            actually
            LMAO I KINDA JUST WANT
            PROGRMAMING LANGS TO BW BOLDED LMAO
            2+ years part no need
            o shit
            the more i look at it
            the more i like the first one
            LMAO

 That 180 flip from liking the bolded version to the non-bolded version.

Round 3

So, we were supposed to present our final prototype at the computer science showcase, but we used that chance to do some more user testing on the people who showed up. Our prototype did not change much, but we made some design changes based on the feedback. We made the heatmap horizontal so that it would take up less space than the vertical version. We also removed the ability to collapse the sections, since people don’t generally think of collapsing parts of job descriptions. Finally, we stacked the minimum and preferred requirements instead of placing them side by side, since the side-by-side layout seemed to suggest some kind of comparison was going on. Users don’t need to compare the two to each other.

For these user tests, we presented users with either real job posts or our modified versions and had them pick a job to apply for out of all the posts there. We timed how long users took to choose a job, and on average they were faster with our versions. We also asked for their feedback on our design, showed them the other set of job posts, and asked for additional feedback. Additionally, we asked them about the Bold View button to see if it made sense to them. The idea behind the button was that not everyone wanted the information in a bolded format in our previous user test.


The main thing we got from these user tests was that users liked the bolded format and would rather have it on by default. They were also pretty happy with the heatmap, although one wished that its question mark icon was more obvious so they knew where to go if they didn’t know what the colors meant. I’m not really sure how we would do that, although we could move the question mark icon directly to the right or left of the text before the heatmap. For the final prototype, I think it would be best if the bolded format was on by default and there was no Bold View button, since that’s an extra feature that isn’t needed. More screenshots (the question mark icon didn’t get moved)!:

Implementation

So, we have this idea and a design for it, but how would we actually get it to work? We were planning to implement NLP and named entity recognition (NER) to generate the heatmap tags and colors for our prototype, but we ended up implementing it by looking at the content of each line in the job description. You can view the code here (the repository and file names are a bit misleading because they contain “NLP”). We also had code to make the heatmap boxes, although in the future we would want to generate them with HTML/CSS/JavaScript and a database of the jobs and their tags.

The fake NLP program takes in a job description and prints out the coding languages and the heatmap levels associated with them, with 2 being the highest or strongest color and 0 being the lowest. It can take in a txt file or a job post from Monster. It compares the words in each line to our own library of coding languages taken from Wikipedia, although we haven’t made it perfect. One problem with our implementation was that “B”, the coding language, was getting picked up when a job description used “B” as a list label (like A, B, C). We just removed “B” from our library. With NLP and NER, we could tag all of the words in a line with their part of speech (POS) and use that information to get the languages required and the modifiers that describe what is necessary for the job. For example, “high proficiency” is different from “some proficiency”.
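For illustration, here is a simplified sketch of that line-by-line approach (not our actual code; the word lists are small stand-ins for our Wikipedia-derived library):

```python
import re

# Simplified sketch of the line-by-line detector: find language names in
# a line and score them 0-2 from modifier words in the same line. The
# word lists are illustrative stand-ins for the real library.
LANGUAGES = {"python", "java", "javascript", "sql", "c++"}
REQUIRED_WORDS = {"required", "must", "proficient", "expert"}
OPTIONAL_WORDS = {"nice", "plus", "bonus", "familiarity"}

def score_line(line):
    """Return {language: heatmap level} for languages found in a line."""
    words = set(re.findall(r"[a-z+#.]+", line.lower()))
    found = LANGUAGES & words
    if not found:
        return {}
    if words & REQUIRED_WORDS:
        level = 2  # required; must know it well
    elif words & OPTIONAL_WORDS:
        level = 0  # not required but nice to have
    else:
        level = 1  # mentioned without a strong modifier
    return {lang: level for lang in found}
```

For example, `score_line("Must be proficient in Python and SQL")` gives both languages level 2, while `score_line("Familiarity with SQL is a plus")` gives SQL level 0.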

NLP and NER would also allow us to separate descriptions of two different skills in the same sentence. Right now, our algorithm assigns the same score to everything that appears in the same sentence, but with NLP/NER, we could be more specific: the program would be able to detect that the skills are being discussed separately and that different verbs and adjectives are being used for each. To take it to the next level, we could feed the information into a machine learning algorithm that generates our tags and heatmap levels automatically, without us explicitly telling it the rules.
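Even without full NLP/NER, the per-skill idea could be approximated by splitting a sentence on clause boundaries first and scoring each clause on its own (a speculative sketch; the keyword lists are made up):

```python
import re

# Speculative sketch of per-skill scoring without full NLP: split a
# sentence into clauses and score each clause separately, so different
# skills in one sentence can get different levels. Keywords are made up.
LANGUAGES = {"java", "sql", "python"}
REQUIRED_WORDS = {"expert", "proficient", "strong", "must"}

def score_clauses(sentence):
    """Return {language: level}, scoring each clause separately."""
    scores = {}
    for clause in re.split(r"[;,]|\band\b", sentence.lower()):
        words = set(re.findall(r"[a-z+#]+", clause))
        level = 2 if words & REQUIRED_WORDS else 0
        for lang in LANGUAGES & words:
            scores[lang] = level
    return scores

print(score_clauses("Expert in Java; some exposure to SQL"))  # {'java': 2, 'sql': 0}
```

Splitting on punctuation is crude (it breaks on sentences like "strong Java and Python"), which is exactly where real POS tagging would help.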

We were also thinking that we could use NLP to organize the job description by extracting the semantics of each line or paragraph and grouping lines with similar semantics together. We could first move the information around manually and then later train an algorithm as before (magic machine learning). Also, even more magic machine learning for determining what parts of the description should be bolded. Although there’s no perfect way to determine what to bold, as long as the algorithm does it well enough, it should be fine. Even a human with a brain can’t always tell what should and shouldn’t be bolded, so an algorithm with some mistakes is tolerable. It would save job board staff the time of going through a description and bolding it manually from scratch (some job boards apparently match candidates manually and things like that). And when the algorithm fails, we could update it by feeding it cases similar to its failures, eventually making it almost perfect?  : D!
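Before any machine learning, a rule-based baseline could simply bold a list of known skill terms (a speculative sketch; the term list is made up for illustration):

```python
import re

# Speculative rule-based baseline for auto-bolding, as a starting point
# before any machine learning: wrap known skill terms in <b> tags.
# The term list is made up for illustration.
SKILL_TERMS = ["Python", "SQL", "React", "machine learning"]

def auto_bold(text):
    """Wrap each known skill term in HTML bold tags, keeping its case."""
    for term in SKILL_TERMS:
        text = re.sub(
            re.escape(term),
            lambda m: "<b>" + m.group(0) + "</b>",
            text,
            flags=re.IGNORECASE,
        )
    return text

print(auto_bold("Experience with Python and SQL preferred."))
# Experience with <b>Python</b> and <b>SQL</b> preferred.
```

A learned model could then improve on this by also bolding years-of-experience phrases and skipping filler words like "possess".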

Fin

That’s mostly it for our project. Does our design work? Of course it does, why would I say no? But actually, users did find our design easier to read and thought it was helpful, especially with the bolded format. Users were also quicker at choosing a job with our design than with the original job posts. It wasn’t a significant difference, but hey, we only had 6 data points. Users especially liked the heatmap, which gave them a quick overview of the requirements.

It would have been nice if we could have coded up an interactive website in the limited time we had. Our prototype solves the problem of displaying job information to job seekers in a structured format, which reduces the time they spend looking at each job, but it isn’t necessarily the best solution to the overall problem of spending less time going through job posts. I think the best solution would show a job seeker only the jobs they are likely to be interested in, based on their current skill set and their interests (for example, the healthcare industry), while also using our design for displaying the job description. That would let job seekers view fewer job posts and view each one faster.

RippleMatch and ZipRecruiter do this, in that job seekers don’t have to look through as many jobs to find one that fits. RippleMatch does it by matching job seekers with companies based on the company’s needs and the seeker’s interests. I’m not sure how ZipRecruiter does it, and it does it to a lesser extent, in that I still get senior positions suggested to me (although the algorithm it uses for emails is better).

An interesting finding once we finished our project was that another job board, Jobbatical, had already implemented our tags and organized-sections ideas. It’s not entirely the same, but it’s even more interesting that their job board focuses on tech positions at companies offering visa sponsorship around the world, similar to how we narrowed the scope of our project to software development positions to make it more feasible to catch tags.


You know what I learned from this project? Trying to solve an unsolved UI issue is kind of hard. For example, we thought that by making it possible for users to reorganize the structure of job posts, we would be able to accommodate different use cases. The two main use cases were mass applying to jobs and looking for a specific job or kind of company to work at. In the first case, the user only cares about whether or not they qualify, while in the second, they care about the company and the job responsibilities. However, we didn’t consider what job seekers would be thinking as they looked through jobs. Most would only be thinking about reading the posts and whether or not to apply. Who’s going to think, “let me move this section of the job post up”? The idea of interacting with job postings to reorganize the information was not intuitive at all, so even if users wanted to reorganize the information, they were unlikely to do so.

It led us to the idea of collapsible sections, but even then, do users want to hide information? They can just scroll. What if they miss out on important information from hiding a section? Does it even make sense to hide information on a job post? These kinds of questions led us to our final design where everything is present. Icons? Do they make sense to users? Do they save time if people don’t know what the icons mean or do they waste time?

We had ideas that we thought would have solved the problem, but we didn’t think about the overall context of the situation they were being used in, leading us to rethink about using those ideas once we did try them out. Always keep in mind whether your design makes sense! It might sound like a great idea at first, but in reality, it might not help or do anything. But trying out the ideas on users is good too, since it’s not always clear what kind of changes should be made.


By the way, if you’re interested in the icons, here’s where we got the icons from: https://www.flaticon.com/

Santi as a Chatbot (SaaC)

Edit 11-2-18 0:57 AM: Added SS of Dogda’s reaction to my blog post

Edit 11-1-18 1:01 AM: Changed the content to have more under The (Nintendo) Switch  : x

Hello my imaginary friends and followers. I just wanted to give you all a quick update that if you follow me, you can truly become imaginary like this one follower:


Anyways, the third project for my Human-Computer Interaction class is to create a chatbot for some purpose that requires a human connection. Some examples my professor gave were teaching someone a specific topic, counseling someone who’s struggling emotionally, or debating a contentious topic. I did this project with Leo, Eddie, and Ethan, and our original vision was to create a chatbot to counsel first years who were feeling lonely or isolated. However, it changed to helping athletes who were having difficulty transitioning to Oxy, with a focus on soccer since Ethan is on the soccer team.

Our final chatbot can be found at https://github.com/nguyen41v/oxycsbot. Just make a copy or download the repository and you can run it! It was a little too hard to get the Slack app working in other Slack workspaces, but if we manage to do that, I’ll add a link here for adding the bot to workspaces.

The constraint that your chatbot should tap into human connections is part of the challenge, and asks whether chatbots can do more than scheduling a hair salon appointment.
As you are designing the chatbot, keep in mind that these are topics where “multiple choice” responses are not appropriate, nor the simple parroting we’ve seen with ELIZA. In fact, these are situations where saying the wrong thing may cause more harm than not saying anything at all.

Our bot’s name is Santi, named after another one of my imaginary friends.

Talking to the real Santi

Dogda’s reaction to my blogpost/bot

Brainstorming for the Initial Focus:

The first step for our project was choosing Santi’s focus. Who should it be for and what should it do? We settled on the broad idea of counseling students who feel isolated at Oxy, since we had no strong preferences about what Santi should do. Our first task was to come up with conversation ideas, so we each did some individual research on college loneliness and put the information we found into a list. Coming back together, our research gave us the idea to focus only on first-year students, since they are more likely to be experiencing loneliness: college is a different environment than high school, and first years are also surrounded by many faces they’ve never seen before. After creating some possible responses, we tested the bot with Junepyo, an Oxy student.

His user testing gave us some insight into the kinds of things we didn’t account for, such as relationship problems. Junepyo also gave us some feedback saying that he didn’t like how Santi always responded with the same “Sorry, I’m just a simple Jane . . .” when it couldn’t do anything with his response. His feedback got us thinking about randomizing Santi’s responses so it would seem more human. Junepyo also kept referring to the bot as Santi when he gave us feedback, so we decided to keep that name and change the responses so that Santi refers to itself as Santi.

 

The (Nintendo) Switch:

However, as we generated more responses and talked more about the flow of the conversation, we realized that what we wanted to do would require a lot of different responses and code; we would need to keep track of what the user had already talked to Santi about and whether Santi had been in a certain state before, since most of our states link to other states or back to themselves. It was slowly getting more complicated and required more and more responses as we considered more possibilities. The work required to implement Santi kept growing as we progressed. We also didn’t know how to handle the darkness: what do we do if students need serious help that warrants a psychologist or therapist?

[Image: lonelyFail]
I know, schizophrenia is a bit out of place.

We tried to account for those scenarios, but this, combined with the fact that our topic was a bit too broad, caused us to reconsider Santi’s purpose in life, much like some first years might be doing themselves.

Link to branch containing code from our original project focus: https://github.com/nguyen41v/oxycsbot/tree/lonely

Eventually we decided to change Santi to focus only on helping first-year athletes transition from high school to Oxy. This was basically a sub-topic of our original vision, and we picked it because our team members have more experience with sports at Oxy than with some of the other sub-topics, such as family and homesickness. Since we were now focusing only on athletes, we discarded a lot of the previous responses and flows that weren’t related to sports, such as those about academics or art. This made implementing Santi much easier, since we only needed to account for possible responses within the realm of sports.

We went back to the white board and worked more on creating the conversation flow, implementing it as we went in the code (albeit, a very basic version of the conversation).

We tried to keep Santi upbeat and understanding: the upbeatness would make Santi seem enthusiastic about sports and the user, while the understanding side would comfort the user. The first thing we did was create an introduction that makes it clear what Santi does and what users should talk to it about.

User: Hi there!
Santi: :shocked_face_with_exploding_head:Hello, I’m Santi.
I help college student athletes better transition to college sports at Oxy.
How has your transition in sports been?

After that, we decided to filter users into different flows based on whether they’d been having a good or bad transition, routing students to the bad-transition path if they mentioned problems while on the good-transition path. An example of a problem is a user indicating that they don’t think they’re doing well in the SCIAC conference. We thought about generic responses Santi could have, such as:

Santi: Have you told your coach?

Santi: I would recommend talking to your teammates. That might help with your transition. Do you have any of your teammates’ contact information?

These kinds of responses might give the user some direction for improving their situation, and they also move the conversation forward. Not many problems came up while we were designing the conversation flow; however, we did realize that the conversation was composed mainly of yes-or-no questions, which we tried to fix. Some of the ways we did this included asking the user to tell Santi more about their situation or asking the user for their mentor’s name.

Another question that came up later was what to do if someone entered a mentor name the bot did not recognize. We could create a function that loops on itself as long as the user does not enter a valid name, but what if the user never types a valid name, and how do we make the re-prompt seem more natural? What we ended up doing was keeping track of the number of times a user entered the function, and having Santi give a different response if it wasn’t the first time (we just realized we never reset this value . . .). We also added new tags for words in a user’s response that might indicate that Santi should stop asking for their mentor’s name; for example, a user might say that they forgot or don’t remember it. As we created the flow, we also did basic checks and tests to make sure that functions linked to the right state after different responses.
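Roughly, the mentor-name check worked like the sketch below. The mentor names, tag words, and state names here are made up for illustration (the real bot uses oxycsbot’s tag/state machinery), but the attempt counter and the escape tags follow the same logic:

```python
# Hypothetical sketch of the mentor-name prompt: placeholder names and tags.
KNOWN_MENTORS = {"alex", "sam", "jordan"}          # made-up mentor names
GIVE_UP_TAGS = {"forgot", "forget", "remember"}    # "I don't remember", etc.

def ask_for_mentor(response, attempts):
    """Return (next_state, new_attempt_count) for the mentor-name prompt."""
    words = set(response.lower().split())
    if words & GIVE_UP_TAGS:
        return "stop_asking", attempts      # user can't recall: move on
    if words & KNOWN_MENTORS:
        return "found_mentor", attempts
    # Unrecognized name: re-prompt, phrased differently after the first miss
    # (and remember to reset this counter eventually!).
    if attempts == 0:
        return "reprompt_first", attempts + 1
    return "reprompt_again", attempts + 1
```

Looping forever on the same prompt would feel robotic, so the counter lets Santi vary its wording, and the give-up tags provide a way out of the loop entirely.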

The Final Stretch of User Testing and Changes:

Once our flow and code were mostly done, we did more in-depth testing with actual responses to see what kinds of inputs users might give that our current tag setup wouldn’t catch.

There were some funny moments in our testing, but it made us realize the variety of responses users could give, such as “not bad”, “not that well”, or “fine”. We had only been considering positive and negative responses, and Santi shouldn’t say “That’s great to hear!” to a neutral one, so we implemented some more tags to capture these. Our tag was called “medium_rare” : ) We also added other tags and functions to make Santi a bit more sociable, since it didn’t really converse much at that point (but making a sociable-er Santi is hard). I also got to teach Leo a little about using the terminal for git throughout our project. We had fun playing around with the terminal and merge conflicts in git (turn up your volume when you play the video).
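The “medium_rare” idea can be sketched like this; the word lists below are invented for illustration, not the actual tags we used:

```python
# Rough sketch of sentiment bucketing for routing the conversation.
# Word lists are placeholders, not the real tags.
POSITIVE = {"good", "great", "well", "awesome"}
NEGATIVE = {"bad", "hard", "rough", "lonely"}
NEUTRAL_PHRASES = {"not bad", "not that well", "fine", "okay", "alright"}

def classify(response):
    """Bucket a response as positive, negative, medium_rare, or unknown."""
    text = response.lower().strip()
    if text in NEUTRAL_PHRASES or text.startswith("not "):
        return "medium_rare"    # yes, that is really what we called it
    words = set(text.split())
    if words & NEGATIVE:
        return "negative"
    if words & POSITIVE:
        return "positive"
    return "unknown"
```

Checking for the leading “not” before looking at individual words is what keeps “not bad” from landing in the negative bucket just because it contains “bad”.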

Too bad it’s not Christmas yet. Shortly after this video, I looked up how to select lines/blocks of code to delete (in vim, you can do shift+v, select the lines you want to delete, and then d; putting in “:[#lineStart],[#lineEnd]d” or “[#lineStart]Gd[#lineEnd]G” apparently works too but I haven’t tried it yet).

We did some more user testing with other people and ourselves and found more bugs and tags we should consider, running more tests after each update to fix those bugs. Our user testing was done on other students, since all students have been first years before. We told them that Santi was meant to help first-year athletes transition from high school to college.

The following video shows an example of what the conversation would be like for students experiencing difficulty in their transition. We implemented the “haven’t” tag after this video, but the video is a good depiction of the kinds of responses Santi would give.

This next video shows how Santi interacts with students who aren’t experiencing a bad transition. In contrast to the previous video, Santi has more positive responses. Not all of the possible paths are shown in these two videos, though; among other things, Santi can also provide contact information for mentors on the soccer team, something we did not have for the other sports teams.

We did multiple tests, adding quick fixes with each one based on the transcripts, and then we were DONE! GitHub repo in case you want to see it again:
https://github.com/nguyen41v/oxycsbot

Ending Remarks:

Overall, the process of making Santi enlightened me on the difficulty of making a hybrid system for conversational interfaces or chatbots. (Compared to Eliza-based systems, which look at the structure of a user’s message to provide an appropriate response, or dialog systems, which have fixed preset responses, a hybrid system takes in any user response, breaks it up, and selects among a few preset responses based on its content.) I knew from the start that it would require a lot of work just because of the way we were implementing it in Python, through the use of tags and specific states for different responses. This setup means a lot of hard-coded responses and tags, which means we have to think of all the possible outcomes and what is common and different between them in order to send Santi to the right state. Sometimes similar responses can mean very different things, and Santi had to be able to catch that. For example, “not good” and “pretty good” both contain “good”, but they mean very different things, and Santi had to differentiate between them to give an appropriate response.
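As a toy illustration of what I mean by a hybrid system (with invented tags and responses, far smaller than the real bot), the whole loop boils down to: break up the free-form input, match words against tag sets, then pick one of the preset responses for the matched state:

```python
import random

# Toy hybrid system: hypothetical tags and responses, not the real bot.
TAGS = {
    "sports": {"team", "practice", "coach", "soccer"},
    "greeting": {"hi", "hello", "hey"},
}
RESPONSES = {
    "sports": ["How has practice been going?", "Have you told your coach?"],
    "greeting": ["Hello, I'm Santi."],
    "unknown": ["Sorry, could you tell me more?"],
}

def reply(message):
    """Match the message's words against each state's tags; answer from
    that state's preset responses (randomly, so it feels less canned)."""
    words = set(message.lower().split())
    for state, tag_words in TAGS.items():
        if words & tag_words:
            return random.choice(RESPONSES[state])
    return random.choice(RESPONSES["unknown"])
```

The hard part is not this loop; it is authoring enough tags, states, and responses that the matching sends users somewhere sensible for anything they might type.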

Unfortunately, we weren’t able to implement our initial vision of helping first-year students who feel isolated because of the amount of coding it required, but we were still able to implement a small part of it, specifically for athletes. We had also wanted Santi to give contact information for coaches and faculty on sports teams, but we did not create a conversation flow for that and we didn’t have all of their information (you can view the start of our implementation at line 705 of our code, though). Let’s take a moment to appreciate all the work and effort people have put into designing and creating non-machine-learning-based hybrid conversational interfaces.

Japanese Goblin

So I showed one of my imaginary friends, let’s call him Dogda, my blog today, and he told me that there were no adventures to look at. This post is going to be a short “adventure” (not really) post!

Here is me introducing Dogda to my blog.
[Image: DDchatA]

As you can see, he doesn’t entirely approve of my content.
I have no clue where to go from here to make this post an adventure. I’m just going to make up some events that are really imaginary.

Dogda and I decided to take a few laps around his home in Florida while we talked about random things, like how he wants to do unpaid work for me even after retirement : )  It was nice and all, but I kept getting bitten by mosquitoes, and so did he. I wanted to stay out longer to experimentally determine which of us got bitten more, because he claims that whenever he’s around, no one gets bitten by mosquitoes. Eventually, he convinced me to go indoors since it was so hot and humid. As we went inside, we ran into his sister, whom I had never met before; Dogda always talks about her, but she’s not home often. It was a rare occasion! Dogda took out some board games and we started playing. At some point, the lights went out, and I asked Dogda if there was a hurricane nearby or something. He told me there was, but that he wasn’t evacuating, and that the only reason he let me hang out with him was the hurricane. Friends through thick and thin, right?

The hurricane wasn’t that bad (I mean, it didn’t exist for me). There was a lot of water though, so we got a kayak and started kayaking around his house. Gotta make sure we don’t kayak into cars or anything. It was quite refreshing to be outside after the stormy weather, but then Dogda tipped my boat over >: ( So I went over and tipped his boat over. We ended up hitting each other with tree branches while wading through the water. It was kind of like trying to fight someone in space. I kind of broke Dogda’s arm with my tree branch (that’s right, my tree branch broke his arm). He stopped talking to me after that . . .

End story.

Here’s a screenshot of how the blog title was decided.

[Image: DDchatss]


I behave like a particle?
