Day Fourteen
Since we finished our final prototype yesterday, today's task was to set up all of our prototypes so we can demonstrate them working together. I also wanted to add some new code I wrote to make the Arduino prototypes more efficient. We needed a few more items for our prototypes: a set of speakers, some longer wires for the distance sensors, and USB extension cables. At the moment we can only run the prototypes at our desks, but for testing we want to move everything over to the mock wall, and that isn't possible as we can't move all of the monitors.
While we waited for these items, we set up the Arduino prototypes at our desks. However, there was a big problem: the Arduino prototypes were no longer working. The sensors were not calibrating properly and the audio was playing no matter what. We decided that, instead of calibrating the sensors every time we restart the Arduinos, we would calibrate them once, save that reading, and have the Arduino reuse it as a fixed trigger range on every startup. We attempted this during the afternoon, but to no avail, so we will continue tomorrow when we have more time.
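Roughly, the idea is to do the calibration once, save the reading on the PC, and push it back to the Arduino on every restart. A minimal sketch of the host side in Python (assuming pyserial; the port name, file name, and `CAL:<low>,<high>` message format are made up for illustration, not our actual code):

```python
import json
import os
import serial  # pyserial

CAL_FILE = "calibration.json"  # hypothetical cache for the one-off reading

def calibrate_once(port):
    """Let the Arduino calibrate and record the range it reports."""
    line = port.readline().decode().strip()          # e.g. "CAL:180,420"
    low, high = map(int, line.split(":")[1].split(","))
    with open(CAL_FILE, "w") as f:
        json.dump({"low": low, "high": high}, f)
    return low, high

def load_or_calibrate(port):
    """Reuse the saved range if we have one; only calibrate the first time."""
    if os.path.exists(CAL_FILE):
        with open(CAL_FILE) as f:
            cal = json.load(f)
        # Tell the Arduino to skip calibration and use the stored range.
        port.write(f"CAL:{cal['low']},{cal['high']}\n".encode())
        return cal["low"], cal["high"]
    return calibrate_once(port)

with serial.Serial("COM3", 9600, timeout=5) as arduino:
    low, high = load_or_calibrate(arduino)
    print(f"Using sensor range {low}-{high}")
```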
Day Thirteen
Today we finished our third and final prototype, which was easier said than done! Originally I thought it would be easy to just implement the audio part of the prototype, but when we tried to enable it we ran into a few problems. Initially I wanted the sphere's script to contain all the audio, but this would have caused issues further down the line, so instead we attached the audio to each of the objects respectively. We added a time delay so that the audio wouldn't play at the very start. To do this we slept the thread, which basically blocked everything else being processed, since Unity runs scripts on a single thread. Another problem was that the audio played on startup and did not loop. This was easily fixed by ticking a couple of boxes on the audio source (its play-on-awake and loop settings), which we spent over an hour trying to figure out!
As pausing the thread is a bad move, we tried a different method for the audio delay: a coroutine. However, for this to work, the audio had to already be playing when the coroutine was called, even if the sphere was not touching any of the objects. We tried many other approaches as well, but none seemed to work, so rather than waste more time we went back to our original solution.
At the end of the day, we started planning the report and what to write in each of its sections. Tomorrow we will start setting up each of the prototypes and sorting out everything we need for the user study.
Day Twelve
Today has been relatively productive. The problem that plagued us this morning was that the sphere controlled by the PSMove was not centred on the mock wall in Unity. This meant the fireplace object was "pressed" on startup, which would trigger its audio in the future prototype. Because the sphere's position is driven by the PSMove tracking, we could not centre it by moving the sphere itself; instead we moved the wall so the sphere started in its centre. After some trial and error it was centred properly.
The next step was to get the sphere to interact with the mock wall, which was easier said than done. We looked at and tried different methods, including changing the material of the sphere itself and using the OnCollisionEnter() function. This should have worked, but it didn't. We suspected the problem was that the mock wall was built out of planes, so we rebuilt the wall from 3D shapes. This, however, didn't change our situation.
After lunch we attempted to solve this problem again. We dropped OnCollisionEnter() in favour of a different function, OnTriggerEnter(), which fires an event when the sphere enters another object. The documentation says that at least one of the two objects needs its Is Trigger box checked. Initially we made the objects disappear on contact, and this actually worked! After that, we made the sphere change colour when it came into contact with the other objects, giving each object its own colour to differentiate between them. We also made the sphere change back to its original colour (white) when it left an object, using OnTriggerExit().
Tomorrow I would like to finish this prototype by adding some audio to it.
Day Eleven
Today has been a learning experience. We were introduced to the PSMove. The tracking software runs on an iMac, and you interface with the application through different commands, which is something I am comfortable with and used to doing. We were also introduced to JSON (JavaScript Object Notation), a format I had heard of but never used. I spent the morning researching it, as I thought it was pretty interesting.
In terms of the development setup, the iMac running the tracking software acts as a server, with my machine acting as a client. On the client machine, we use Unity with C# scripts to handle the audio for each object. My machine and the iMac are connected via Ethernet, with JSON used to send the coordinates from the tracking software. The Unity scene comprises a sphere (representing the PSMove controller) that moves accordingly. The software that tracks the PSMove is actually really cool; I looked through the scripts to see how everything worked, just for my own understanding. Out of the box, the software inverted the direction of the controller (when the controller went left, the sphere went right). To correct this, I just negated the coordinates. The software doesn't send Z coordinates either, but that doesn't matter: for simplicity's sake, the sphere only travels in the X and Y planes in Unity.
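The actual client code is a C# script inside Unity, but just to illustrate the data and the fix: each packet is a small JSON object of coordinates, and correcting the mirroring is a sign flip. The field names here are a guess at the format, not the real ones:

```python
import json

# Hypothetical shape of one tracking packet from the iMac; the real
# field names in the PSMove software may differ.
packet = '{"x": 0.42, "y": -0.17, "z": 0.90}'

def to_sphere_position(raw):
    """Flip the mirrored axis and drop Z, since the sphere only
    moves in the X/Y plane."""
    data = json.loads(raw)
    return (-data["x"], data["y"])  # negate X so left means left

print(to_sphere_position(packet))  # (-0.42, -0.17)
```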
To help us visualise the sphere interacting with the paintings, Fraser created the mock wall in Unity. This is just a prototype to test whether we can trigger the audio when the sphere touches an item; in later stages, the mock wall will be scaled appropriately. When we tested the wall with the PSMove tracking software, we discovered an alignment issue with the sphere, which can easily be fixed. Meanwhile, I searched the web for a C# audio library. I found several, but I think we are going to use the NAudio library because it is well documented and has decent support.
Tomorrow I would like to get a working prototype for one of the paintings. That leaves the rest of the week to make sure all the other prototypes are working and to prepare for the user studies next week.
Day Ten
Today has been a whirlwind day! Last night I changed the code to use a different audio library, and from my initial testing it was working better than the previous one. There was one slight issue, however: I forgot to push it to GitHub. This meant we couldn't use the better library for today's presentation and had to get rid of the looping in the code. That isn't a big feature, but it is nice to have. We spent the whole day preparing for the presentation while also making sure our prototypes actually worked.
In the morning things were slow and frustrating. The code refused to work despite somewhat working the day before, so I spent the morning trying to get the basic functionality of the prototypes running. Once I had figured out some of the bugs, I went for an early lunch before I drove myself crazy.
After lunch, we went straight back to it. I figured out why my calibration code was not working: I was trying to pass values by reference, but reassigning a parameter inside a Python function doesn't change the caller's variable, so the calibration values were never making it back out. Once I changed the code so that the calibration data was returned and actually saved to a variable, the rest of the prototype jumped into life.
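For my own notes, here is a stripped-down reconstruction of the kind of bug this was (not the actual prototype code): rebinding a parameter inside a function has no effect on the caller's variable, so the reading has to be returned and saved.

```python
def calibrate_broken(threshold):
    # Rebinding the local name does NOT update the caller's variable.
    threshold = 350  # pretend this value came from reading the sensor

def calibrate_fixed():
    threshold = 350  # pretend this value came from reading the sensor
    return threshold

limit = 0
calibrate_broken(limit)
print(limit)             # still 0 -- the calibration silently vanished

limit = calibrate_fixed()
print(limit)             # 350 -- saved to a variable, as in the fix
```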
We then practised our presentation and made sure all the code actually worked properly. The presentation was useful because it got us some feedback on our prototypes. This weekend I am going to merge the code I wrote last night into the working version. Hopefully this will make the code less resource-heavy as well as add back the looping feature. I will also make sure the code actually gets pushed to GitHub this time!
Day Nine
Today was a bit of a slow day. I wanted to get the two-sensor prototype working correctly and start on the second Arduino prototype. This was an almighty challenge, as everything just went wrong. Firstly, I had to think of a way for the Python script to differentiate between the two sensors so that it plays the correct audio. This was simple enough: I added a prefix (for example S1:) to each serial output on the Arduino side, which let me use Python's standard string handling to find the prefix, separate it from the reading, and pass the value to the right function. Next, it was a case of playing the two streams of audio; fortunately I could use Pyglet and create two media players, each with their own audio. When I went to test this, there was a lot of cross-talk between the audio streams. I realised that a calibration function I had built was not working correctly (in both the C and the Python code), so I tried to debug it. I still do not know what the issue is on the Python side.
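A minimal sketch of that prefix routing with two Pyglet players (assuming pyserial and Pyglet; the port name, file names, and range are placeholders rather than our actual values):

```python
import pyglet
import serial  # pyserial

# One Pyglet player per sensor, each queued with its own clip.
players = {}
for sensor, clip in [("S1", "painting.wav"), ("S2", "tapestry.wav")]:
    player = pyglet.media.Player()
    player.queue(pyglet.media.load(clip))
    players[sensor] = player

def handle_reading(sensor, value, low=150, high=400):
    """Play a sensor's clip while its reading is inside the range."""
    if low <= value <= high:
        players[sensor].play()
    else:
        players[sensor].pause()

# NB: depending on the Pyglet version/driver, its event loop may also
# need to run for playback; this loop just shows the routing.
with serial.Serial("COM3", 9600, timeout=1) as arduino:
    while True:
        line = arduino.readline().decode().strip()  # e.g. "S1:312"
        if ":" in line:
            sensor, reading = line.split(":", 1)
            if sensor in players and reading.isdigit():
                handle_reading(sensor, int(reading))
```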
Meanwhile, Fraser was working hard on the presentation for tomorrow, and I helped him with it. I then decided to set the two-sensor prototype aside and move on to the second prototype. This one was slightly easier, as it only involves one depth sensor plus a "pressure pad". We had completely forgotten about the pressure pad, so I quickly mocked one up using some Post-it notes. It works, but it's not the best - when we test these solutions properly we will have a proper pressure pad. In the C code, I made the program output 2000 when the pressure pad is pressed and 3000 when it isn't. These values were chosen because the analogue sensors can only reach 1023, so the pad's values can never collide with a sensor reading - not that it matters much, since both sensors have different prefixes anyway.
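On the Python side, the pad then just becomes another prefixed stream whose values mean on/off rather than a distance (the "S2" prefix here is a placeholder; sentinels as described above):

```python
PAD_PRESSED, PAD_RELEASED = 2000, 3000  # sentinels above the 1023 ADC ceiling

def pad_is_pressed(line):
    """Interpret a prefixed serial line such as 'S2:2000' from the pad."""
    _, value = line.split(":", 1)
    return int(value) == PAD_PRESSED

print(pad_is_pressed("S2:2000"))  # True
print(pad_is_pressed("S2:3000"))  # False
```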
Another bug we have is the audio looping if someone stays away from the sensor for too long. I think this is down to how the Pyglet library works, and I will look at implementing a new library tonight.
Day Eight
This morning Fraser and I wanted to find an audio library that would allow the audio to be paused and resumed in response to the sensor. After scouring the internet and trying several different Python libraries to no avail, we found one that actually worked. The library we are going to use for the Arduino prototypes is Pyglet, an easy-to-use and nicely documented Python library. After trying it out in a test script, we incorporated it into our main Arduino script, and it did what we wanted it to do.
However this wasn’t enough. We wanted to have the audio to be looped so that when people walk up after the audio has finished it will restart. This was pretty simple to do with Pyglet’s built in functions but there was still a problem. It would just loop the audio without giving an indication to the user that it has finished and they should move on. As well as that if you walked out of range it would simply pause it no matter the length of time, this introduced the problem that if someone moved onto the next sensor it would just pause the audio.
After lunch, Fraser and I split into separate tasks to cover more ground. I set out to fix the audio loop and to find a way for the Python script to differentiate between two sensors. I managed to build my own audio loop and tried to incorporate the RGB LED to visually alert the user to move on. The latter is still causing me issues: when I send a certain number over the serial connection, the Arduino doesn't register it at all. This problem took me all afternoon, and I probably should have managed my time a bit better. Meanwhile, Fraser tried to get the sensor to output actual distances in centimetres. He tested several different libraries to see if this was possible and discovered that the library he was using only worked over part of the distance range. We subsequently decided to put our grid system on hold, as the sensors do not have a suitable range.
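One classic cause of exactly that serial symptom, noted here as a guess rather than a diagnosis: pyserial only sends bytes, and on the Arduino side Serial.read() hands back the ASCII code of each character rather than the number itself.

```python
import serial  # pyserial

with serial.Serial("COM3", 9600, timeout=1) as arduino:
    # arduino.write(2) would fail: write() expects bytes, not an int.
    arduino.write(b"2\n")  # send the digit as an ASCII byte
    # On the Arduino, Serial.read() now returns 50 (the ASCII code of
    # '2'), so the sketch must compare against '2' or use parseInt().
```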
Day Seven
In the morning, we looked over what we had done the previous day. When I went to use the depth sensor again it did not work; it kept failing to calibrate as it should. I rewired my breadboard to see if that would fix it. It did not. So I swapped in a different sensor, and that worked fine. After that I wanted some kind of visual cue to show that the sensor is calibrating, which took some trial and error until I thought it was just right. Troubleshooting the Arduino took us all morning.
After lunch, we went to work finding a way to play some audio when someone is in range of the sensor. We researched different methods, such as Processing, but settled on Python with serial ports, as that was a language we were both experienced with. The method is relatively simple: the Python script opens a connection to the Arduino, the Arduino reports the trigger range after calibration, and the script then reads the serial output from the sensor; if a value falls within the range, it plays the audio (a sketch of this loop follows the list below). We are currently using winsound. This is a very simple library to use, but it does pose a couple of issues:
This only works if you are using a Windows machine - not a big issue at the moment, but it could be if this were rolled out.
Currently, once the audio is called, it plays the whole clip even if the object is no longer in range.
There isn't a way to pause or stop the audio.
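The loop itself looks roughly like this (a sketch assuming pyserial; the port name, range, and file name are placeholders):

```python
import serial    # pyserial
import winsound  # Windows-only, hence the first issue above

LOW, HIGH = 150, 400  # placeholder range reported after calibration

with serial.Serial("COM3", 9600, timeout=1) as arduino:
    while True:
        line = arduino.readline().decode().strip()
        if not line.isdigit():
            continue
        if LOW <= int(line) <= HIGH:
            # Blocks until the clip finishes and offers no pause/stop --
            # the second and third issues above.
            winsound.PlaySound("exhibit.wav", winsound.SND_FILENAME)
```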
These issues are what we plan to solve tomorrow, as well as thinking of a way to interface with the fireplace scenario, as I don't believe a depth sensor would work in that case. We may also explore using a Bluetooth shield so the audio can be triggered without being tethered to a machine.
Day Six -- The start of the 2nd week
This morning Fraser and I got straight to work and wired up the Arduino to the depth sensor, which was relatively simple to do. The serial monitor displays a stream of values that the sensor is picking up. As we didn't know what these numbers meant in terms of distance, we conducted a little experiment: we took each sensor and placed an object in front of it at 10 cm intervals up to 1 m (the sensor range is 10-70 cm) to see if we could decode the numbers. We did the whole experiment only to realise that we hadn't calibrated the sensor, which meant we had wasted the whole morning. Even once we had set up the calibration, we were still seeing random, varied numbers in the serial monitor.
After lunch, we received feedback on what we had done in the morning. We learnt that the sensor is always going to have noise and will therefore jump around a lot. So instead of building a distance scale, we decided to use the calibration to set up a range that triggers an event if an object is inside it. To learn more about calibration, we made a light theremin. This was a fun exercise, but the sound became extremely annoying after a while. Once we were happy with the results of the theremin, we moved on to the RGB LEDs.
RGB LEDs are really cool, but setting them up was a bit of a challenge. For some reason both of our setups had a gremlin that stopped the RGB LED lighting up in any colour. After we took everything apart and put it back together, it finally worked! We had the idea of using the depth sensor and RGB LED together to tell users whether they are in range. Once it was set up, I combined the RGB and depth sensor code so the LED lights up green if the user is in range and red if they are out of range. Currently it only works if you calibrate it a small distance from the sensor. Hopefully tomorrow it will be a quick fix, and then we can move on to triggering some audio when someone is in range.
Day Five
After receiving our kits yesterday, we set off to work on the exercises provided with them. The first exercise was the blinking LED, which was a good introduction to using the Arduino. When first wiring up the Arduino I got a bit confused by the breadboard layout, but once I had finished I understood it. With the initial setup complete, I messed around with the code: I changed the delay times and played around with fading the light in and out. I also built my own fade (although not as smooth as the built-in one), modifying how fast it lit up as well as the increments of the light's intensity. Next, we did the 8-LED exercise. I enjoyed this one the most (although I had to re-wire it because my layout was a bit wrong), as I created different patterns with the LEDs. The next circuit used a button to switch an LED on and off. Fortunately, everything worked first time round. I modified some code so that the LED would fade in and out at the push of a button - something I would like to see incorporated into a prototype.
After lunch we spent some time messing around with the potentiometer exercise. We then started to mind-map other ideas we could prototype for the project. One idea Fraser came up with is a pressure-pad trail: a set path with pressure pads that trigger each audio event. Another idea is using AR to display information on top of the selected objects without obscuring the view. We also considered using some kind of magnetic sensor, with magnets used to trigger the audio events, although this needs a lot more research.
Next week, we will be using the distance sensors in our prototypes.
Day Four
When we got the assets, each section came as several different files. As we didn't want to download every single asset, we spent some time stitching the files together in Audacity so that each section had a single audio file. Once that was done, we used the NFC Tools app to point each NFC tag at its corresponding audio file. This posed a few problems:
All the assets had to be downloaded locally to the device itself
You needed the NFC Tasks app to point to where the files were
The files had to stay in the same location, otherwise it would cause an error
Although this is workable for what we are doing, we wanted to produce something that would work on any NFC-enabled device. After a bit of thinking, we decided to use the free online website builder Weebly. This let us upload the files and exploit (most) web browsers' built-in players to stream the audio. The method was quick and simple -- we also chose Weebly because its free subdomain hosting let us host the files there.
Personally, I like this solution. It is simple and easy; there is no requirement to download a specific app. If this solution were used in a real scenario, it would be better to build and host your own audio player instead of using a free website generator, for factors such as:
Better-quality audio - at the moment the files are compressed so they don't exceed Weebly's file size limit
Auto-play - on the version of Chrome we tested on mobile, the audio would not play automatically; this is a feature I personally feel is good to have
Scalability
In the morning, we also brainstormed questions we would like to ask someone testing our prototype. We will be asking these questions across all our prototype iterations, so we kept them as general as possible (rather than asking something like "Do you use NFC?", since you can't ask most people "Do you use distance sensors?"). We included quantitative questions so we could spot trends across the different prototypes, as well as some open, opinion-based questions so we could receive feedback and improve where necessary.
In the afternoon, once we had received some feedback on our questions, we started to research the Likert scale. This scale is used in many questionnaires, as it offers more than a 'yes' or 'no' answer. After doing some research, we decided to change some of our questions into statements rated on a scale from 1 to 5 (1 being strongly disagree, 5 being strongly agree). We also kept some of our opinion-type questions to gather qualitative feedback.
Once we had improved our questions, we picked up the Arduino kits that we will be using tomorrow and next week. I am looking forward to using them, and I did some background reading on the Arduino so I'm ready to start tomorrow. Finally, we got some more feedback on our questions; there was some confusion on our part, but we have since rectified the issue.
Day Three
For most of the morning, Fraser and I mocked up our project on a wall. This let us test the placement of the tags as well as how they would be applied. The wall we mocked up on is smaller than the actual wall, so we had to scale our model down. We used black tape to outline the various objects and Post-it notes to label the outlines. Once that was complete, we erased all of our tags and re-wrote each one with its corresponding number. We have just received the assets for the project and will continue to experiment with this system tomorrow.
Instead of sitting around doing nothing, we finished off the Python game we made yesterday by debugging it and adding comments. I then uploaded the code to GitHub for anyone to use. At the moment it only runs a single game - to improve it further I would like it to offer replays. Maybe we could create a tally file that would allow game results to be saved in the future?
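Purely as a sketch of that tally idea (hypothetical, not something in our repo yet), the replay loop could persist results to a small JSON file:

```python
import json
import os

TALLY_FILE = "tally.json"  # hypothetical score file

def play_game():
    """Stand-in for the real tic-tac-toe game; returns 'X', 'O', or None."""
    return "X"

def load_tally():
    if os.path.exists(TALLY_FILE):
        with open(TALLY_FILE) as f:
            return json.load(f)
    return {"X": 0, "O": 0, "draws": 0}

tally = load_tally()
while True:
    winner = play_game()
    tally[winner if winner in ("X", "O") else "draws"] += 1
    with open(TALLY_FILE, "w") as f:
        json.dump(tally, f)
    if input("Play again? (y/n) ").lower() != "y":
        break
```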
After finishing the game code, we brainstormed different ideas for encasing the tags to make them more pleasing to the eye. We sketched several iterations of this "pad" to see what it would look like. We decided it should be something square so that someone's phone could easily receive the information from the tag. We want the pads to be made of cheap, easy-to-work materials to reflect the inexpensive technology inside them. The good thing about the tags is that they are passive and can therefore be used anywhere. We also compared the advantages and disadvantages of using NFC.
Tomorrow we will use them to create a working prototype. Furthermore, I would like to research different ways to interface with the tags without using the app.
Day Two
When we first came in, we looked over our Python code from yesterday and changed it so it could be executed from the console. This is something I already know how to do, but it was useful practice as I don't normally use Python. Once we had reviewed our code, we were set a coding task: using Python, we had to code a game of tic-tac-toe. Fraser and I allocated ourselves roles to finish this project; I worked on the game mechanics whereas Fraser worked on the UX and debugging, which I feel was a good pairing. We worked on our code until lunch, and when we returned our heads were still hurting from a bug in it.
In the afternoon, we spent some time working with the NFC tags. We used the Nexus 4s provided, along with the four NFC tags, to play around and see how they worked. To interface with the tags we used an app called NFC Tools, which allows the tags to be read and written. The app was super easy to use and would work well in our project. While figuring out the app, we ran some tests to see how the phones interact with the tags: we measured the distance at which the phone first triggers a response from a tag, as well as the surface area of the NFC reader on the phone. We also wondered whether having a case on your phone makes a difference, and discovered that it does (although a very negligible one). Once we concluded our testing, we did some research into NFC, which will help when we start writing our reports. Overall, I am fairly confident in using the NFC tags and in knowing how they work.
Later, we went back to the code. Fraser fixed the bug in one of his functions and I finished the game mechanics. At the moment there is only a single bug left, which I believe I can rectify later on. Tomorrow I would like to achieve the following: create a scale model of the wall and do some rapid prototyping with the tags. I would like to use the app to modify the tags, and once that is complete, experiment with different ways of interfacing with them.
Day One
Fraser and I were given a tour around the Computer Science Department at the University of Bath, where I will be working for the next few weeks. There is a lot of stuff in this department that I would like to take home with me! After the tour, we were given our project brief: create an Interactive Narrative Kiosk (InK) for Dyrham Park, in which four objects will each trigger some audio. The problem we face is how to trigger that audio.
Over the course of the next few weeks we are going to create three different prototypes: NFC tags, Arduino, and Unity/PSMove. I made notes on these three approaches, detailing the advantages and disadvantages of each. At the moment, the best options look to be the NFC tags or the Arduino system. Personally, I think the Arduino system with a depth sensor will give the best experience, as I believe it is the most natural way of interfacing with the system. On the other hand, I don't believe the PSMove will be the best option, as I do not see it as a viable 'real world' scenario. While discussing our prototypes, Fraser pointed out that a depth sensor may not be able to differentiate between the fireplace and a painting -- this is going to be an interesting problem to solve.
After lunch, I had to reinstall Visual Studio, as it is a terrible IDE! Once that was fixed, I spent some time writing a little Python program that takes in a series of inputs, prints them to the screen, and then writes them to a text file. This was pretty fun, as I got to code something. I also installed my personal favourite text editor, Visual Studio Code, as it's the program I feel most comfortable in. After that, I was going to write the same program in Java, but it has been a long while since I used Java, so I spent the time going over how it works to refresh my memory.
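A guess at the shape of that warm-up program (not the original code):

```python
# Gather inputs, echo them to the screen, then write them to a file.
lines = []
while True:
    entry = input("Enter some text (blank line to finish): ")
    if not entry:
        break
    print(entry)
    lines.append(entry)

with open("output.txt", "w") as f:
    f.write("\n".join(lines))
```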
Overall, I am really looking forward to the next few weeks and to seeing what Fraser and I develop! I also hope that I will never be late again!