Book Cover Generator


I decided to update my Brief History of Time book cover generator for the final. I liked the project and it seemed like there were some clear ways for it to move forward. I was interested in adding ways for the user to adjust the generated covers before evolving the next generation. I liked the idea of the user collaborating with the algorithm in the creative process.

The first step of the update was to refactor the book cover code. In the original version the genes overlapped quite a bit in their expression: one gene could be expressed in 3 or 4 circles. Adding a slider to change the color of one circle, for example, would also change the color of several other circles. I fixed this by adding many more genes and by updating the for loop that generates the circles so that no gene is used more than once. (Though in class Shiffman suggested maybe using a node-based approach instead of genes, so there’s that to look into.)
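The refactor boils down to a cursor that walks the genome so each slider maps to exactly one gene. A minimal sketch of the idea (the function and field names here are illustrative, not the actual project code):

```javascript
// A cursor consumes genes in order, so every circle property owns
// exactly one gene index and no gene is expressed twice.
function circlesFromGenome(genes, circleCount) {
  let cursor = 0;
  const nextGene = () => genes[cursor++]; // each call consumes a fresh gene

  const circles = [];
  for (let i = 0; i < circleCount; i++) {
    circles.push({
      x: nextGene(),
      y: nextGene(),
      radius: nextGene(),
      hue: nextGene(), // a slider tied to this index affects only this circle
    });
  }
  return circles;
}
```

With this mapping, a slider bound to one circle's property can write back to a single known gene index without disturbing any other circle.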

I also switched from drawing many covers on one canvas to drawing each cover as a p5.Renderer object. I would like to eventually add functionality for the user to download the cover they designed, and this would allow that.


After doing that I added sliders that controlled various aspects of the book cover. I started with the positioning of the title and author text and the color of the background. It would make sense to add lots of other tools as well, like moving the circles, changing their colors, and maybe ultimately an interface more like a drawing program. But, I wanted to try and get this working with the genetic algorithm.

I could not get the covers to generate correctly, but I am still working on it. As I think about this project more, I think that, in addition to adding all of the unrealized features mentioned in this post, it would be neat to try to generate the original design for the cover based on the text of the book. That way, you could feed in a text, have the program evolve the first generation, and then go from there. There are all kinds of text analyses that could be used, like sentiment analysis, most common words, or story arc, and these could be linked to different color schemes or design styles. That would be super cool!

Can I Eat This Mushroom?

Mushrooms are pictured, on October 20, 2012, in the Clairmarais wood, northern France. (Photo credit: PHILIPPE HUGUEN/AFP/Getty Images)

 

I would like to make a program that predicts, from a picture, whether a mushroom is edible. I have found a mushroom dataset here, but I am unsure if it has enough data. This project is loosely affiliated with my thesis, a web app to get people into the outdoors, but is primarily a way to learn about and play with the technology we’ve been learning in class.

Adventure AR Trail Guide


 

Concept

AR can reveal hidden worlds around us. Often these worlds are charming fantasies. I think it’s more powerful to reveal the layers that are already here, but go unnoticed. I am creating a web app that encourages groups of friends to explore parks in and around New York City. As a part of that, I have created a trail guide that provides directions through the park, as well as information on points of interest. I think that AR is well suited to this guide because instead of looking at a picture in a book, on a printout, or on their phone, hikers will be able to see the information situated directly on top of what they are seeing. I hope that this will be a very clear learning experience and that it will underline how much is hidden in the world around us that goes ignored.


 

I selected Inwood Hill Park as my proof of concept location for this project. Inwood is a good ‘just right’ park for this project. It’s not too big, but not too small. Not too far away, but not as close as Central Park. It’s not a manicured park, but it’s not so wild that it would make first-time adventurers uncomfortable. Also, it is a very natural park: it has relatively little landscaping in its wild areas, and most of its features were designed by nature, not Frederick Law Olmsted.

I had done some previous research about features and paths in the park, but work for this project started with an in-person visit. I wanted to make sure that my path made sense and to identify places where AR elements would make sense. I identified three points for possible AR experiences: the Shorakkopoch rock (it marks the supposed site where the Dutch bought the island from the Lenape), a glacial pothole, and Whale Rock.

Whale Rock seemed like the simplest site to augment, so I started with that. The rock has several deep grooves in it created by glacial movement. I wanted to create a way to highlight these glacial striae.


 

I imagine a simple overlay showing the visitor where to look. Unlike other AR experiences, these moments should not be immersive and time-consuming. The main point of the project is to get people out into nature, not to have them looking at their phones the whole time. These short, digestible moments will give people an ‘a-ha!’ moment, and then be over.

Build

Github Repo

I elected to build this project with the Motion Stack library. This would allow me to keep these moments simple and in the browser, but still give me access to the sensors on the phone. The Orientation Cube and the RelativeHeading Image Panning functions both seemed particularly relevant. It was easy to build a working demo from the example code. I also built a short demo from a tutorial on accessing the phone camera from the browser.

However, the camera demo did not work well on mobile. I was unable to access the rear camera. After some frustrating debugging I found two problems: 1) Accessing cameras from the browser does not work well on older hardware, or even on slightly older versions of Chrome. This affected me pretty significantly, because I’m an iPhone user and the phones I was testing with were older hand-me-downs. 2) The example code I was following on MDN did not seem to work. Adding a constraint to use the rear camera broke the demo without giving any errors in the console.

Using MediaDevices.enumerateDevices() and then selecting the camera was successful. I built based on an example here. Now that I had both elements of the experience working, I could start combining them. That produced three bugs to be fixed. 1) The pano/overlay image covers the whole viewport, so it’s impossible to select the rear camera. This is the most fixable bug, but also the most frustrating because I had just spent so much time getting the camera to work. 2) The pano/overlay image is doubled for some reason. 3) If I remove the audio selectors from the HTML the video fails to load.
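For reference, the device-selection step that finally worked can be sketched as a small pure function over the list that `MediaDevices.enumerateDevices()` resolves with. The ‘back’/‘rear’ label heuristic is an assumption (and labels are only populated after camera permission has been granted):

```javascript
// Pick a rear camera by label instead of relying on facingMode constraints.
// Device objects mirror the shape returned by MediaDevices.enumerateDevices().
function pickRearCamera(devices) {
  const cams = devices.filter((d) => d.kind === 'videoinput');
  // Rear cameras are usually labeled "back" or "rear" on mobile browsers.
  const rear = cams.find((d) => /back|rear/i.test(d.label));
  return rear || cams[0] || null; // fall back to any camera, or null
}

// In the browser (not runnable in Node), usage would look something like:
// navigator.mediaDevices.enumerateDevices()
//   .then(pickRearCamera)
//   .then((cam) => navigator.mediaDevices.getUserMedia({
//     video: { deviceId: { exact: cam.deviceId } },
//   }));
```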


Next Steps

  1. Fix the bugs. They all seem fixable, but annoying.
  2. Get an HTTPS site. I currently don’t have one set up, so I can’t host code that uses the webcam until I do.
  3. Work on the UX for the AR experiences specifically. When do I need to explain what glacial striae are? Is there an intermediate screen before the AR? What should be there?
  4. Test with an object. I can’t go to the park every day to test, so building a mockup of some kind to refine the interaction with seems like a good plan.
  5. User test. I need to make sure that this makes sense to other people too.

 

 

Tunnel VR

This project is a short experience for Google Tango. I wanted to play with the link between Tango and the real world, so I decided to create a VR space where the motion of the user was more controlled. Building some kind of tunnel seemed like a good way to do that. Also, the assignment for this project was to tell a story, and moving between two spaces seemed like a good way to convey transition.

Like all of the other projects this semester, I spent a lot of time setting everything up. I hadn’t installed anything for developing on Android, so installing that was a pain. Also, finding the file path for the Android SDK and Java SDK, then putting that information into Unity took more time than I would like to admit. I think the only reason that process wasn’t soul crushingly frustrating was that a bunch of us sat down and did it together.

After I got everything installed, things went pretty quickly. I decided to build my space using only Unity default meshes. I created and moved a bunch of cubes to make a short stretch of tunnel, then copy/pasted that section to make a longer tunnel. I wanted to emphasize the change of space for the big room at the end, so I made the room very tall and added the large sphere. I hoped it would be a cool space, but I’m not sure it achieves that. I added the glowing pink balls and the particle systems to up the ‘cool factor’; I think they ended up looking pretty magical. Finally, I made the lighting in the big room very pink and put a teal light at the beginning of the tunnel. That way, you move into a pinker and pinker space.

I like how this turned out, especially since I was able to concentrate on building and lighting for a lot of the time instead of fighting bugs. I think that audio is the only thing missing from this little sketch.

 

MW&MUR Haunting


 

I thought that it would be interesting to try and make a ‘haunting’ for this assignment. Maybe something where the app reveals a ghost, and then the ghost follows you around. That seemed like it could be easy to achieve, since the location services code seemed understandable and straightforward to use. Also, I spent some time building the Unity Roll-a-Ball tutorial this week, so I was feeling better about Unity too.

I did some YouTube research and found a tutorial that seemed pretty close to the thing I wanted to build (here). This seemed like a great fit for me: I am still having some trouble understanding how things are constructed in Unity, and following this would let me build along with the tutorial and then tinker with the project afterwards. Plus, this would be an opportunity to learn how to build to my phone. Great!

I built the example project, but I had a great deal of trouble getting Unity to build the Xcode project. After exhausting my google-fu and asking Rui, it seemed like the best course of action was to rebuild the project. I did that, and after spending several hours updating Unity, Xcode, and my OS, I finally got a build working on my phone.


 

The GPS location data doesn’t seem to be working yet, but it seems close. Right now I’m not sure how to go about debugging that problem.

Augmented Object


 

My original plan for this assignment was to use an image of mountains in the Catskills to create an informational layer that would identify the mountain and perhaps give some information about it. I would love to include an AR component in my thesis, which is largely about the outdoors, and this would have been a little experiment toward that. However, the images I was working with were not strong trackers, and Unity never picked up on the image.

So, I tried something else. I have this pillow with a pretty bold pattern on it:


I thought that it would be neat to try and bring the little faces to life. What are they like? What are they up to during the day when I’m not home? Do they resent being squished all the time?


 

That plan didn’t work either. Even though the tracking image was pretty strong, it never registered. Do patterns on a 3D object not track well? Maybe?

 


 

At this point I decided to go for just making something work. Anything. I chose to do this week’s project alone so I would have to do all the work myself and learn about Unity (since I’m a total n00b). I’d hoped to be able to put in some animations and cool stuff, but at this point I was really frustrated and just getting the darn thing working was my main focus. I decided to try augmenting a book, since covers are flat and graphic and should make good targets. I picked Hyperbole and a Half. The cover seemed like it would work as a target, and the cartoon style of the book seemed like something I could replicate with stock shapes in Unity. Maybe I could make the book into the character from the book. Also, the harried, crazy, lost tone of the stories suited my state of mind at this point.

hyperb

 

I did eventually get something working, but not as much as I would have liked. It took me some time to get my head around simple things like rotating objects (seriously, it is so counterintuitive!). The result isn’t pretty, but I did learn a bunch about Unity.

week2 AR from coldsoup753 on Vimeo.

 

Hacking Political Rhetoric Final


 

http://itp.jscottdutcher.com/eye_video2/

Where you look is what you see.

This project is a personal reflection on the 2016 Presidential Election. In the aftermath of the election there has been a great deal of concern about filter bubbles and fake news. It really seems like everyone was looking in a different direction for their news. With this project I am hoping to illustrate the contrast between the narrow slice of the media people consume and the breadth of what is available.

It’s a simple premise, but one that struck a chord with me. I have found myself thinking more and more about the unseen and unknown, and this was another opportunity to explore that concept. What do people not see and why? Does confronting people about the difference between what they see and what is available have any effect? And simply, what things are unseen? I wouldn’t say that I lean on any of these ideas particularly hard in this project, but they are the thoughts that I have been playing around with recently.

I proposed to use eye tracking software to create an experience where only the video someone was looking at was clear, while everything else just faded away. The proof-of-concept transitions that I ended up creating start faded and become clearer when you look at them. This produces an effect of revealing something hidden, like turning over a rock to see what is underneath. I think that it produces the same contrast between the unseen and the seen, but now the viewer may be inclined to see more and discover something unfamiliar.

The main challenge for this project was simply getting all the code to work. I picked webgazer.js for my eye tracking software, largely because it appeared to be the most up to date and well maintained option available. This meant that my project had to live in the browser, which in turn meant that I had to figure out how to deal with all the videos I wanted to use without the whole thing grinding to a halt. Simply embedding YouTube videos ended up being the best solution.
A remaining challenge is integrating webgazer.js into the project. I need to set up a site with an SSL certificate in order to use a computer’s webcam. Also, I need to figure out how to get the tracking data out of the canvas and use it to trigger the transition. However, based on my tests it does look like webgazer will run with all of the embedded videos. That’s a huge improvement over previous versions!

Eye Tracking Test from coldsoup753 on Vimeo.

UnseenPlacesUSA Documentation

  https://twitter.com/UnseenPlacesUSA

[Github Link coming soon]

Concept

UnseenPlacesUSA is a Twitter bot and dataset containing the names, descriptions, and geographic coordinates of ‘unseen places’ in the United States. These places are locations that go unnoticed due to their remote location, or because we choose to put them out of mind. The bot tweets these places with a sentence describing the location, a Google Maps link, and a satellite photo. The ‘unseen-ness’ of these locations is subjective. A prison is only unseen if you do not know anyone in the prison system. A power plant is only unnoticed if it is not in your neighborhood. Even so, I believe that most of these places are unfamiliar to many people. I hope that by recording the locations and making them more public, people can discover locations they have never heard of, but more importantly that neglected places will be re-considered. Finding an unseen place is an opportunity to consider why that place might be unseen, if its neglect is appropriate, and what that might say about us. A Twitter bot is an excellent way to perform this data. It allows the places to be considered individually, with a degree of measure. The bot also feels like it is sharing a secret, which is exciting. Pushing the places into a conversational sphere invites discussion, and bringing these often remote locations into an intimate space (the tweet will be seen on someone’s phone or computer) contrasts both the size of the physical location and the scope of the systems that the locations represent.
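As a rough illustration of the tweet format described above (the field names `description`, `lat`, and `lon` are assumptions about my schema, and the real bot also attaches a satellite photo):

```javascript
// Turn a place document into tweet text: one descriptive sentence
// plus a Google Maps link built from the decimal coordinates.
function composeTweet(place) {
  const mapsLink = `https://www.google.com/maps/place/${place.lat},${place.lon}`;
  // Twitter counts any link as a fixed-length t.co URL, so the
  // description just needs to stay comfortably short on its own.
  return `${place.description} ${mapsLink}`;
}
```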

Implementation

The UnseenPlacesUSA Twitter bot is built on Node.js using Twit and the places data is stored in MongoDB. The main challenge of this project was collecting the data itself. I started the dataset with a list of unseen places that I thought would be interesting and then tried to find location data for those places. Most of the data comes from Wikipedia. Wikipedia contains many lists of locations, such as federal prisons, wind farms, and national monuments. I wrote a web scraper that uses node-scrapy. The scraper will run through a list of location names, search for the Wikipedia page, and then scrape the location data from that page. If there is no page or location data, the scraper will write the place name into a file, so I can look up the information manually later.

Other data comes from hobbyist sites (the missile silo data especially) and had to be converted from sexagesimal notation to decimal notation. That was done using formulas I found here. I also wrote a script that takes a street address and converts it into decimal coordinate notation using the Google Maps API. This was particularly useful for datasets that only contain street addresses, such as the list of cattle feedlots I copied from the American Angus Association.
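The sexagesimal-to-decimal conversion itself is small; a generic version of the standard formula (a sketch, not the exact script I used):

```javascript
// Convert degrees/minutes/seconds plus a hemisphere letter (N/S/E/W)
// to decimal degrees: degrees + minutes/60 + seconds/3600,
// negated for the southern and western hemispheres.
function dmsToDecimal(degrees, minutes, seconds, hemisphere) {
  const sign = /[SW]/i.test(hemisphere) ? -1 : 1;
  return sign * (degrees + minutes / 60 + seconds / 3600);
}
```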

All of these scripts convert name, location, and description data into a document in my database. I chose to use a database instead of a json document to give this project room to grow in the future. I was also happy to have the chance to learn about using databases.

I had originally planned on setting up some kind of web interface for adding locations to the database, but after processing all the data I have collected so far it has become clear to me that most datasets are individual enough that it would be more work to write the code for a site that can handle them all than to simply tweak the templates that I have already created.

Next Steps

In the short term, I would like to build a small dashboard for the dataset. I would like to be able to see at a glance what kinds of places are in the dataset and how many. I also think that it might be worth doing more research into the data I already have. For example, I am interested in differentiating between publicly run state prisons and privately run state prisons.

Something else worth considering is how important completeness is for this project. It is not important to have an exact and complete list of all the landfills in New York State, for example, when the information is being tweeted. Each tweet is individual and is not considered as part of a whole. However, when the same information is shown on a map, missing information might become more visible and important. Data omissions also have meaning.

There are also potential new features for the bot. It would be interesting for the bot to be able to tell someone an unseen place, if they tweet a location at the bot. The bot could also be a good way for people to suggest locations they would like to add. Sharing on Twitter is a great way for the project to gain visibility.

I am also excited to explore what kind of future projects this data might lend itself to. I am personally interested to see what it looks like when I bring up all of the satellite images for a state. Will there be commonalities I had never noticed before?

And of course, there is always the ongoing work of finding more unseen places to add.

Educate the Future Final Documentation

 

https://github.com/jessipedia/chat_proj

Class: Educate the Future
Fall 2016
Interactive Telecommunications Program

Overview
This course has asked you to evaluate the need for higher education. You have observed current problems, solutions and imagined new ones for Higher Education, 1 year, 5 years, 10 years into the future. How will people learn? How will teachers teach? How will you measure your academic success? How will students connect to peers and experts? Who will be able to attend this future? Will higher ed be on your wrist or in a building? Will education be gamified? This documentation of your final reaches to answer some of these course objectives.

Research / Backstory

“Globalization is a proxy for technology-powered capitalism, which tends to reward fewer and fewer members of society.”

-Om Malik

I am interested in continuing education for adults. I think that this is an overlooked area now, and that the need for more prevalent and comprehensive continuing education will only grow as more jobs become automatable.

Although the number of jobs threatened by automation is poised to grow massively in the near future (an estimated 47% of jobs in the US are at risk of automation), this is hardly a problem only for futurists. This is a problem for the now.

I’m an upper middle class design and technology student living in New York City. This is not an issue that is going to affect me right away, if it affects me substantially at all. The careers I am training for are projected to be pretty safe from future automation. My economic position sets me up for all kinds of social blind spots where job loss and economic vulnerability are concerned, but it remains an issue I feel deeply about.

I grew up in Rochester, NY, home of Eastman Kodak. The imaging company still exists, but to say it’s a shell of its former self does not begin to describe the transformation of the company. Kodak has gone from being a tech giant to a glorified Kinkos. I was very young when things really started falling apart, but I remember watching that company die. The local news was constantly reporting waves and waves of layoffs. I listened to my parents talk in worried tones about which of their friends and acquaintances got their “pink slip”. Local churches started support groups for people dealing with the emotional toll of job loss. Managers with kids in high school started over as cashiers at the local supermarket. In fifteen years about 27,000 people lost their jobs.

But that’s not the part that stays with me, it’s this: today 33% of the people who live in the city of Rochester live in poverty. Household incomes across Monroe County have fallen, even in affluent towns and neighborhoods. By many metrics the area is well past the point of ever being able to recover. When Kodak died the city took a blow it can never come back from.

Similar stories have been playing out in factory towns across the country for some time now, but something that always strikes me about Kodak’s story is that it wasn’t limited to blue collar workers. Waves of job loss hit people seemingly indiscriminately. That is the kind of future increased automation may have in store for us. Education is hardly the only answer, but making sure that people have constant access to new education and information seems like a strong place to start.

 

How Technology Is Destroying Jobs
Baxter: The Blue-Collar Robot
SILICON VALLEY HAS AN EMPATHY VACUUM
Will Your Job Be Done By A Machine?
Elon Musk: Robots will take your jobs, government will have to pay your wage
Bill Gates on the Future of Employment (It’s Not Pretty)
The Future of Employment Report
How the Recession Upskilled Your Job
Labor Market Recovers Unevenly

Benchmarking Rochester’s Poverty
In Kodak’s town, life after layoffs

Problems

My original research problem, “Lack of accessible, well-made continuing education resources for adults who need to re-skill”, was broad enough to be incorrect. Not all adults lack well-made continuing ed. Some people re-skill just fine. Breaking this down ended up creating more questions:

  • How can we help learners assess their own skills when they are looking to enter a new industry?
  • How can we create quality, low cost material to help students learn new job skills?
  • How can we make sure that continuing education resources are available to as many people as possible?
  • How can we give as many students as possible access to great teachers?
  • How can we help teachers reach more students?
  • How do we assess what skills will be most valuable in the job market?

Design Challenge

I ended up picking “How can we help learners assess their own skills when they are looking to enter a new industry?” as my design question.
Interviews

I spoke with four people about their experiences moving into a new industry for this project. One was a woman who moved from marketing and PR to nursing, two were ITP students who left advertising to study for a creative career at ITP, and one was a woman who was trying to move from general non-profit management to arts non-profit management but had not yet managed to make the switch.

Common themes from the interviews were:

  • The importance of having a plan. Not having one extends the uncomfortable period of not-knowing, makes people feel aimless, and makes the problem feel insurmountable.
  • Almost all of the people I spoke with seemed to know early on what kind of career change they needed to make, but did not realize that they knew.
  • People faced a set of unknown unknowns, not realizing that the things they were truly passionate about could become a career
  • The feeling of being alone on the journey

 

Design

The points that struck me the most from my interviews were that people seemed to know what they wanted to do, even if they didn’t realize that they knew, and that people felt lonely while they were going through this process.

I decided to work on a solution that would address these two issues. I also decided to limit my audience to adults about 20-30 years old. That was who I had interviewed and who I would be doing my user testing with, so it made sense, but it also seemed like helping someone at this stage of their career might have more impact than helping people who are mid-career. It’s easier for a 25 year old to change direction than a 45 year old, and if you make a change when you’re younger, you might save yourself decades of work misery.

I ended up designing a chat bot that would prompt users with questions. Hopefully these questions would help people consider their career and realize that they knew what direction they needed to head in. I hope that a chat interface also helps people feel less lonely. This is not a solution that would work for people disinclined to be introspective and honestly answer questions about themselves. In this kind of exercise you get out what you put in, so if you do not put much in you probably will not reap any insights.

When designing the bot I assumed that the programming itself didn’t actually have to be that complex or ‘lifelike’. I think that dumb AI is better, not for any anti-technology reason, but because it allows people to project more of themselves into the conversation. Also, I think that people are less frustrated when they can quickly understand the limitations of an interaction. I found ELIZA, from 1966, to be an excellent source of inspiration.
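The ELIZA-style approach is essentially ordered pattern matching that reflects the user’s own words back as an open question. A toy sketch of the idea (these particular prompts are made up for illustration, not taken from my bot):

```javascript
// Each rule pairs a keyword pattern with a reply built from the
// captured text, so the user's own words drive the conversation.
const rules = [
  { pattern: /i want to (.*)/i, reply: (m) => `Why do you want to ${m[1]}?` },
  { pattern: /i feel (.*)/i, reply: (m) => `How long have you felt ${m[1]}?` },
  { pattern: /my job (.*)/i, reply: (m) => `Tell me more about how your job ${m[1]}.` },
];

function respond(input) {
  for (const rule of rules) {
    const match = input.match(rule.pattern);
    if (match) return rule.reply(match);
  }
  return 'Can you say more about that?'; // default keeps the conversation going
}
```

A non-committal default reply matters as much as the rules: it keeps the exchange moving even when nothing matches, which is most of what makes the bot feel responsive.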

 

User Testing 

Insights from user testing:

  • About half of the people tried to break the bot, which the bot was not able to withstand
  • Many people wanted the bot to have more personality
  • Many people liked that the bot did not have much personality
  • People liked that they could review the chat history on their phones
  • Kyle wanted to be emailed ‘results’ or ‘insights’ from his sessions
  • Some people did not like the discussion model used in the bot, wanted something more like CBT
  • People were a little confused at first about what the bot was
  • People wanted the bot to be a general therapist, not just talk about career goals

Next Steps

More robust chat program – Everyone I user tested the bot with tried to have an interaction with it that the bot couldn’t handle. This varied from asking it questions about itself to outright trying to break it. I think that many people’s first instinct with a bot is going to be to try to break it, so being more ready for that is a must. Also, people were interested in the ‘character’ of the bot, which my test version was largely without. Adding in content about the bot’s ‘self’ would be good. It could also be a good way to give more information about the point of the project and set the tone for the kind of answers the bot is expecting.

From a more technical point of view, this bot barely works even when it does work. It cannot adapt and doesn’t use websockets, so only one person can chat with it at a time. A much more significant buildout is needed if I ever expand the project.

The ability to suggest – Something that I got from my user testing is that people often don’t know what kinds of opportunities are available for them to pursue their interests. I am hesitant to try to build a bot that simply tells someone what they should do with their lives, but one that can suggest popular books related to their topic of interest, or find meetups, seems like it would help.

The ability to ask how things went – The main point of this project is to help people consider what they want from a job or career and to reflect. Checking in with people after they do something on their work-path, like going to a meeting or having an informational interview, seems like another strong opportunity to help people reflect.

The Future

This section is kind of an addendum to the project. This project had me considering the future of AI assistants. Most of the AI assistants we have now (Amazon Echo, Siri, Cortana, Google Assistant) are directed assistants: they only interact with you when you talk to them first. These kinds of directed assistants seem to dominate how we think about the future of AI assistants. However, I don’t think that these kinds of assistants help us be better humans.

An example from my presentation is Ask Jeeves vs. Literary Jeeves. The first gives you answers when you ask it directly, but the second is interested in helping someone be their best self. I think that we should be trying to design artificial assistants that help us be better people, not just all-knowing lazy people. At first glance this seems like a technical challenge that calls for all kinds of fancy sentiment analysis and machine learning technology, but as the ELIZA project shows, you can get a long way with simple technology and good writing.