Adventure AR Trail Guide




AR can reveal hidden worlds around us. Often these worlds are charming fantasies. I think it’s more powerful to reveal the layers that are already here but go unnoticed. I am creating a web app that encourages groups of friends to explore parks in and around New York City. As part of that, I have created a trail guide that provides directions through the park, as well as information on points of interest. I think AR is well suited to this guide because, instead of looking at a picture in a book, on a printout, or on their phone, hikers will be able to see the information situated directly on top of what they are seeing. I hope this will make for a very clear learning experience and that it will underline how much in the world around us goes ignored.



I selected Inwood Hill Park as my proof-of-concept location for this project. Inwood is a good ‘just right’ park for this project. It’s not too big, but not too small. Not too far away, but not as close as Central Park. It’s not a manicured park, but it’s not so wild that it would make first-time adventurers uncomfortable. Also, it is a very natural park. It has relatively little landscaping in its wild areas; most of its features were designed by nature, not Frederick Law Olmsted.

I had done some previous research about features and paths in the park, but work for this project started with an in-person visit. I wanted to make sure that my path made sense and to identify places where AR elements would make sense. I identified three points for possible AR experiences: the Shorakkopoch Rock (it marks the supposed site where the Dutch bought the island from the Lenape), a glacial pothole, and Whale Rock.

Whale Rock seemed like the simplest site to augment, so I started with that. The rock has several deep grooves in it created by glacial movement. I wanted to create a way to highlight these glacial striae.



I imagine a simple overlay that would show the visitor where to look. Unlike other AR experiences, these moments should not be immersive or time-consuming. The main point of the project is to get people out into nature, not to have them looking at their phones the whole time. These short, digestible moments will give people an ‘aha!’ and then be over.


GitHub Repo

I elected to build this project with the Motion Stack library. This would allow me to keep these moments simple and in the browser, but still give me access to the sensors on the phone. The Orientation Cube and the RelativeHeading Image Panning functions both seemed particularly relevant. It was easy to build a working demo from the example code. I also built a short demo from a tutorial on accessing the phone camera from the browser.
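The heading-based panning idea can be sketched with the raw DeviceOrientation API that libraries like Motion Stack wrap. This is my own minimal sketch, not Motion Stack’s actual API; the helper name, element id, and the 3600px panorama width are assumptions for illustration.

```javascript
// Map a compass heading (degrees) to a horizontal pixel offset into a
// wide panorama image, so turning the phone pans the image.
// headingToOffset is a hypothetical helper, not part of Motion Stack.
function headingToOffset(headingDeg, panoWidth, viewportWidth) {
  const normalized = ((headingDeg % 360) + 360) % 360; // clamp to [0, 360)
  const maxOffset = panoWidth - viewportWidth;
  return Math.round((normalized / 360) * maxOffset);
}

// Browser-only wiring: pan an element with id "pano" as the phone rotates.
if (typeof window !== 'undefined' && 'DeviceOrientationEvent' in window) {
  window.addEventListener('deviceorientation', (e) => {
    const pano = document.getElementById('pano');
    if (pano && e.alpha !== null) {
      const offset = headingToOffset(e.alpha, 3600, window.innerWidth);
      pano.style.transform = `translateX(-${offset}px)`;
    }
  });
}
```

Note that `alpha` is only a true compass heading on some platforms; a relative-heading approach (storing the first reading as a baseline) avoids depending on absolute north.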

However, the camera demo did not work well on mobile: I was unable to access the rear camera. After some frustrating debugging I found two problems. 1) Accessing cameras from the browser does not work well on older hardware, or even on slightly older versions of Chrome. This affected me pretty significantly, because I’m an iPhone user and the phones I was testing with were older hand-me-downs. 2) The example code I was following on MDN did not seem to work. Adding a constraint to use the rear camera failed silently, without any errors in the console.
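For reference, the constraint-based approach that failed for me looks roughly like this. This is a sketch of the standard getUserMedia pattern, not the exact MDN code I was following; using `ideal` instead of `exact` makes `facingMode` a hint rather than a hard requirement, which may behave better on browsers that don’t support it (an assumption based on the behavior I saw).

```javascript
// Build getUserMedia constraints that *prefer* the rear camera.
// rearCameraConstraints is a hypothetical helper name.
function rearCameraConstraints() {
  return {
    audio: false,
    video: { facingMode: { ideal: 'environment' } }, // hint, not requirement
  };
}

// Browser-only: request the stream and attach it to a <video> element.
if (typeof navigator !== 'undefined' && navigator.mediaDevices) {
  navigator.mediaDevices
    .getUserMedia(rearCameraConstraints())
    .then((stream) => { document.querySelector('video').srcObject = stream; })
    .catch((err) => console.error('getUserMedia failed:', err));
}
```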

Using MediaDevices.enumerateDevices() and then selecting the camera manually was successful. I built based on an example here. Now that I had both elements of the experience working, I could start combining them. That produced three bugs to fix: 1) The pano/overlay image covers the whole viewport, so it’s impossible to select the rear camera. This is the most fixable bug, but also the most frustrating, because I had just spent so much time getting the camera to work. 2) The pano/overlay image is doubled for some reason. 3) If I remove the audio selectors from the HTML, the video fails to load.
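The enumerate-then-select approach can be sketched like this. The selection heuristic (matching the device label, falling back to the last video input, which on many phones is the back camera) is my own assumption, not guaranteed behavior, and `pickRearCamera` is a hypothetical helper name.

```javascript
// Pick the rear camera from an enumerateDevices() result.
// Label matching only works after camera permission has been granted,
// since labels are blank before that.
function pickRearCamera(devices) {
  const cams = devices.filter((d) => d.kind === 'videoinput');
  return (
    cams.find((d) => /back|rear|environment/i.test(d.label)) ||
    cams[cams.length - 1] ||
    null
  );
}

// Browser-only: enumerate devices, then open the chosen camera by deviceId.
if (typeof navigator !== 'undefined' && navigator.mediaDevices) {
  navigator.mediaDevices.enumerateDevices().then((devices) => {
    const cam = pickRearCamera(devices);
    if (!cam) return;
    return navigator.mediaDevices
      .getUserMedia({ video: { deviceId: { exact: cam.deviceId } } })
      .then((stream) => { document.querySelector('video').srcObject = stream; });
  });
}
```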


Next Steps

  1. Fix the bugs. They all seem fixable, but annoying.
  2. Get an HTTPS site. I currently don’t have one set up, so I can’t host code that uses the webcam until I do.
  3. Work on the UX for the AR experiences specifically. When do I need to explain what glacial striae are? Is there an intermediate screen before the AR? What should be there?
  4. Test with an object. I can’t go to the park every day to test, so building a mockup of some kind to refine the interaction with seems like a good plan.
  5. User test. I need to make sure that this makes sense to other people too.



Tunnel VR

This project is a short experience for Google Tango. I wanted to play with the link between Tango and the real world, so I decided to create a VR space where the motion of the user was more controlled. Building some kind of tunnel seemed like a good way to do that. Also, the assignment for this project was to tell a story, and moving between two spaces seemed like a good way to convey transition.

Like all of the other projects this semester, I spent a lot of time setting everything up. I hadn’t installed anything for Android development, so getting it all installed was a pain. Also, finding the file paths for the Android SDK and the Java SDK, then entering them into Unity, took more time than I would like to admit. I think the only reason that process wasn’t soul-crushingly frustrating was that a bunch of us sat down and did it together.

After I got everything installed, things went pretty quickly. I decided to build my space using only Unity’s default meshes. I created and moved a bunch of cubes to make a short stretch of the tunnel, then copy/pasted that section to make a longer one. I wanted to emphasize the change of space for the big room at the end, so I made the room very tall and added the large sphere. I hoped it would be a cool space, but I’m not sure it achieves that. I added the glowing pink balls and the particle systems to up the ‘cool factor’, and I think they ended up looking pretty magical. Finally, I made the lighting in the big room very pink and put a teal light at the beginning of the tunnel. That way, you move into a pinker and pinker space.

I like how this turned out, especially since I was able to concentrate on building and lighting for most of the time instead of fighting bugs. I think audio is the only thing missing from this little sketch.


MW&MUR Haunting



I thought that it would be interesting to try to make a ‘haunting’ for this assignment. Maybe something where the app reveals a ghost, and then the ghost follows you around. That seemed achievable, since the location-services code looked understandable and straightforward to use. Also, I spent some time building the Unity Roll-a-Ball tutorial this week, so I was feeling better about Unity too.

I did some YouTube research and found a tutorial that seemed pretty close to the thing I wanted to build (here). This seemed like a great fit for me. I’m still having some trouble understanding how things are constructed in Unity, so this would let me follow along while building the project and then tinker with it afterwards. Plus, it would be an opportunity to learn how to build to my phone. Great!

I built the example project, but I had a great deal of trouble getting Unity to generate the Xcode project. After exhausting my google-fu and asking Rui, it seemed like the best course of action was to rebuild the project. I did that, and after several hours spent updating Unity, Xcode, and my OS, I finally got a build working on my phone.



The GPS location data doesn’t seem to be working yet, but it seems close. Right now I’m not sure how to go about debugging that problem.

Augmented Object



My original plan for this assignment was to use an image of mountains in the Catskills to create an informational layer that would identify each mountain and perhaps give some information about it. I would love to include an AR component in my thesis, which is largely about the outdoors, and this would have been a little experiment toward that. However, the images I was working with were not strong tracking targets, and Unity never picked up on the image.

So, I tried something else. I have this pillow with a pretty bold pattern on it:


I thought that it would be neat to try and bring the little faces to life. What are they like? What are they up to during the day when I’m not home? Do they resent being squished all the time?



That plan didn’t work either. Even though the tracking image was pretty strong, it was never recognized. Do patterns on a 3D object not track well? Maybe?




At this point I decided to go for just making something work. Anything. I chose to do this week’s project alone so that I would have to do all the work myself and learn about Unity (since I’m a total n00b). I’d hoped to be able to put in some animations and cool stuff, but at this point I was really frustrated, and just getting the darn thing working was my main focus. I decided to try augmenting a book, since book covers are flat and graphic and should make good targets. I picked Hyperbole and a Half. The cover seemed like it would work as a target, and the cartoon style of the book seemed like something I could replicate with stock shapes in Unity. Maybe I could make the book into the character from the book. Also, the harried, crazy, lost tone of the stories suited my state of mind at this point.



I did eventually get something working, but not as much as I would have liked. It took me some time to get my head around simple things like rotating objects (seriously, it is so counterintuitive!), but eventually I did. It’s not pretty, but I learned a bunch about Unity.

week2 AR from coldsoup753 on Vimeo.