12/11/18 – Navigation + Photography
Today we got the navigation and photography software working in the new physical form. Initially, we were having trouble with turning. Forward and backward motion worked, but turns would barely move the bot. We eventually figured out that the turning problems were due to the new acrylic bot being much heavier than the cardboard prototype. Our original wheel setup could not apply enough force to turn the heavier bot on the carpet of SciLi 8. We solved this problem by moving from a 9V wheel motor power supply to a more powerful battery.
[image]
After some tuning for the higher powered battery, we got the full navigation and photo stack working. You can see in this video that the bot recognizes where I am, (slowly) turns towards me, drives forward and takes a picture. Once it is done taking the picture, it makes a big turn to start searching for other people.
[video]
The long pauses between movements in this demo are due to the slow speed of object detection running on the Pi. We only get one frame every 2 seconds, so for safety, we need to take small steps and pause to wait for new observations about the environment. This would be much faster and smoother with a higher-fps detection model, which I was planning to implement on the Neural Compute Stick. However, that ended up being much more complicated than I anticipated, so I didn't have time to finish it. In future iterations with the NCS, we could potentially get 20 fps, up from the current 0.5.
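The step-and-pause pacing itself is simple; here is a minimal Python sketch (the bot and detector helpers are hypothetical stand-ins for our actual motor and detection code):

```python
import time

DETECTION_PERIOD = 2.0  # the Pi-only detector gives us ~1 frame every 2 s

def navigation_step(bot, detector):
    """One cautious step: act on the latest detection, then wait for a fresh one."""
    people = detector.latest_boxes()                # hypothetical: newest detections
    if people:
        target = max(people, key=lambda b: b.area)  # biggest box ~ nearest person
        bot.turn_toward(target.center_x)            # hypothetical motor helpers
        bot.forward(duration=0.5)                   # small step only,
    else:
        bot.turn(duration=0.5)                      # keep scanning for people
    bot.stop()
    time.sleep(DETECTION_PERIOD)                    # then pause for the next frame
```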
Final Prototype Assembly
[images]
Today we assembled our final prototype! This included putting on the mirror film, screwing all the parts together, covering the back pieces in velvet, and attaching/connecting the electronics. There are still a few more refinements to be made before Wednesday, such as covering up the electronics in the back, but it’s almost done!
12/7/18 – CandidBot Navigation
We made progress yesterday combining the object tracking and navigation software with our cardboard prototype. We attached the Pi, display and camera to the prototype.
[images]
We then got the camera working with the display.
[video]
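Mirroring the live feed to the display takes only a couple of picamera calls; roughly (a sketch assuming the standard picamera library):

```python
from time import sleep
from picamera import PiCamera

camera = PiCamera()
camera.resolution = (640, 480)
camera.start_preview()   # overlays the live camera feed on the attached display
sleep(30)                # keep the preview up while we test
camera.stop_preview()
```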
We also got the camera feed pipelined into the object tracker. Here you can see an image from the camera on the bot detecting the people in the room behind us.
[image]
Then, we worked on motion. If the bot can’t detect a person to approach, it turns in circles until it finds someone:
[video]
When the bot does detect someone, it approaches them to take a picture:
[video]
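Together, the two clips amount to a small state machine. Here's a rough Python sketch of it (bot, pick_target, and the box attributes are hypothetical stand-ins for our actual code):

```python
SEARCH, APPROACH, PHOTO = range(3)
CLOSE_ENOUGH = 0.2   # target box covering ~20% of the frame means we're close

def behavior_step(state, bot, detections):
    if state == SEARCH:
        if detections:
            return APPROACH
        bot.turn(duration=0.5)              # spin in place until someone appears
        return SEARCH
    if state == APPROACH:
        if not detections:
            return SEARCH                   # lost them; go back to scanning
        target = pick_target(detections)    # hypothetical: choose whom to approach
        if target.frame_fraction > CLOSE_ENOUGH:
            return PHOTO
        bot.turn_toward(target.center_x)
        bot.forward(duration=0.5)
        return APPROACH
    bot.take_photo()                        # PHOTO: snap, then search again
    return SEARCH
```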
Acrylic Assembly + Wheels
[images]
Many updates today on our yet-to-be-renamed CandidBot! Between 9am and 12pm this morning, our project did a complete 180 and is finally starting to come to life :) We received the acrylic piece for the final version of our robot, created transparent wheel cases by bending the pieces of acrylic we had previously laser cut, and assembled the wheels both into the cases and against the bottom of our robot's base. We also put together a support system that will keep the structure upright and stop the acrylic from flopping around, and got the wheel software and accompanying hardware going to see the full structure move within our final design! Very exciting! 🎉🎉 We also made plans for next steps happening this evening, on Friday, and throughout the weekend. Hopefully by Monday our final iteration will start looking very sleek :)
11/28/18 – Tracking
We realized that for the bot to identify and approach groups of people to take their picture, it would have to recognize that the people it's detecting are the same people over multiple timesteps. This is because the bot has to pick someone to approach and keep approaching that same person over the time it takes to reach them.
To accomplish this, I merged our object detection model with a Kalman filter tracker. The tracker looks at the centers of the detected bounding boxes and, based on their movement, predicts where it thinks they will be in the next frame. When it receives the detected bounding boxes from the next frame, it compares its predictions against the new boxes to determine which boxes identify the same person between frames. In the video below, you can see by the color coding that the tracker is able to uniquely identify Matt and Sol over almost the entire video.
[video]
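A stripped-down Python/NumPy sketch of the idea (the real tracker also has to spawn new tracks, drop lost ones, and gate matches by distance):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

class Track:
    """Constant-velocity Kalman filter over one box center (x, y)."""
    def __init__(self, center, track_id):
        self.id = track_id
        self.x = np.array([center[0], center[1], 0.0, 0.0])      # [x, y, vx, vy]
        self.P = np.eye(4) * 10.0                                # state uncertainty
        self.F = np.array([[1, 0, 1, 0], [0, 1, 0, 1],
                           [0, 0, 1, 0], [0, 0, 0, 1]], float)   # motion model
        self.H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)   # we observe x, y
        self.Q, self.R = np.eye(4) * 0.01, np.eye(2) * 1.0       # noise (to tune)

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]                     # predicted center for the next frame

    def update(self, z):
        y = z - self.H @ self.x               # innovation: measurement vs prediction
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P

def match(tracks, centers):
    """Pair new detections with tracks: Hungarian assignment on predicted distance."""
    preds = np.array([t.predict() for t in tracks])
    cost = np.linalg.norm(preds[:, None] - np.asarray(centers)[None], axis=2)
    rows, cols = linear_sum_assignment(cost)
    return [(tracks[r], centers[c]) for r, c in zip(rows, cols)]
```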
Photo Uploads to Google Drive
Finally got photos to upload from the Pi to Google Drive. I had run into some authorization issues, but I just plugged the Pi into a monitor and now uploads work without a hitch.
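For reference, the upload itself is only a few lines. Here's a sketch assuming the PyDrive wrapper (an assumption, not necessarily the exact library we used); its one-time OAuth step opens a browser, which is presumably why plugging into a monitor fixed the authorization problem:

```python
from pydrive.auth import GoogleAuth
from pydrive.drive import GoogleDrive

gauth = GoogleAuth()
gauth.LocalWebserverAuth()        # one-time OAuth flow; needs a browser
drive = GoogleDrive(gauth)

def upload(path):
    f = drive.CreateFile({"title": path})
    f.SetContentFile(path)
    f.Upload()

upload("candid_0001.jpg")         # hypothetical filename
```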
11/14/18 – Object Detection
Today I managed to get the YOLOv3 object detection network running on both my laptop and our Raspberry Pi. Here's a sample video of it detecting me (and some other people) in the SciLi:
[video]
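One common way to run YOLOv3 from Python is OpenCV's dnn module; here's a minimal person-detection sketch along those lines, assuming the stock yolov3.cfg/yolov3.weights files (not necessarily our exact setup):

```python
import cv2
import numpy as np

net = cv2.dnn.readNetFromDarknet("yolov3.cfg", "yolov3.weights")

def detect_people(frame, conf_threshold=0.5):
    blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416),
                                 swapRB=True, crop=False)
    net.setInput(blob)
    boxes = []
    for out in net.forward(net.getUnconnectedOutLayersNames()):
        for det in out:                   # det = [cx, cy, w, h, obj, class scores]
            scores = det[5:]
            if np.argmax(scores) == 0 and scores[0] > conf_threshold:
                boxes.append(det[:4])     # class 0 is "person" in COCO
    return boxes
```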
On my laptop, detection runs at ~30 frames per second, but on the Pi we only get about 0.5 fps, which isn't enough for real-time detection. To solve this, we will incorporate a Neural Compute Stick (a small USB accelerator for neural-network inference), which will speed up prediction significantly.
[image]
Next, I will use a clustering algorithm on the bounding boxes to identify groups of people and direct the bot where to go.
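Something like DBSCAN over the box centers should do it; a sketch under that assumption (the pixel-space eps would need tuning):

```python
import numpy as np
from sklearn.cluster import DBSCAN

def group_people(boxes, eps=100.0):
    """Cluster detections into groups; boxes are [cx, cy, w, h] per person."""
    centers = np.array([[b[0], b[1]] for b in boxes])
    labels = DBSCAN(eps=eps, min_samples=1).fit_predict(centers)
    groups = {}
    for label, box in zip(labels, boxes):
        groups.setdefault(label, []).append(box)
    return list(groups.values())          # each entry is one group of people
```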
CandidBot Base in Action
Here's a video of the base of the 'CandidBot' in action! The bot is programmed to go straight for 2 seconds, pause for 3 seconds to take a photo, turn randomly for 1 second, and then repeat that cycle. It has a distance sensor to make sure it doesn't crash into anything. If it comes close to an obstacle, it backs up, turns 180 degrees, and then starts the original cycle again.
[image]
CandidBot Team Progress
[images]
A lot of great progress was made today in bringing our newly-named, potentially-to-be-renamed "CandidBot" to life! We split our six-person team into three groups for the day: camera operation, wheel operation, and design/construction.
Matt and Ben focused their efforts on camera programming. Right now the camera continuously takes pictures and saves them to the Pi. Ben is working on auto-uploading images to Google Drive, while Matt is focusing on object detection, which we will use to determine when "CandidBot" should snap a photo.
On the wheel development end, Sophie and Annabel did a great job programming our previous Arduino car to move forward, wait for a few seconds (while the camera takes the photo), turn a number of degrees, and repeat the process. They also made the car interact with its surroundings: whenever the ultrasonic sensor detects it is about to "collide" with something, the car backs up, turns around, and proceeds in a different direction.
Regarding design and construction, Pooja and I (Pauline) assembled all the main components needed for this iteration of "CandidBot". We started with the triangular suspension pole that will host the camera, drilling a hole for attaching it and scoring away a square-shaped area around it to accommodate the base of the camera. We also built the triangular base that will replace the car when we move the wheels onto our actual prototype. This base will support the camera pole as our robot moves around. For this part of the process, we included two flat triangular layers and two similar layers with a triangular cut-out where the pole will sit. What remains to be done for the base is adding supporting walls to the inner lining of the cut-out triangle, for additional stability for the camera.
All in all, the group has been achieving great things and "CandidBot" is slowly starting to take shape :)
11/4/18 – Wheeled Robot Ideas
[images]
10/31/18 – Costume Car
Today in class, Bang and I extended our car robot by putting the proximity sensor on a servo. When the sensor recognizes that the car is too close to a wall, the servo moves the sensor in a sweep to determine the angle of the car relative to the wall, so that the car can turn in the correct direction to move away.
[video]
Following this, in an attempt to make the car ~spooky~ for Halloween, we merged it with my previous angler fish project.
[image]
[video]
10/29/18 – Arduino Car
Today in class we finished the basic car assignment. After assembling our car, we successfully managed to drive it in a square.
[video]
After this, we added a proximity sensor to enable it to avoid hitting obstacles. 
[image]
It wasn’t calibrated correctly at first.
[video]
However, after some modifications, we were able to make it successfully avoid obstacles and drive around the room.
[video]
Pulley System (cont'd)
Using masking tape, we added stops to the pulley system to prevent it from moving out of our desired range of motion. We connected the bar to the 3D-printed mechanism, which made it easy to create the pulley system with string. The string was strung through plastic piping, which we concealed around the ear pieces. We also planned out the ear pieces so that they would be the points where we can activate the voice control system.
[image]
String attachment:
[image]
Pulley attachment:
[images]
[video]
Iron Man Mask: Pulley system progress
We have 3D printed the correct sizes and shapes for the metal bar holders that connect to the helmet. The major challenge now is orienting the end pieces (the black pieces on either end of the metal bar) so that the mask opens and closes in a uniform arc. Right now, it can be correctly positioned in a fully closed or fully open state, but not both. I think this is mostly because our metal wire is not bent at exact angles. We are going to try rebending the wire or replacing it.
[images]
10/19/18 – IronMask Voice Control
Since we are now using a Raspberry Pi, we switched to a USB microphone. This will allow us to easily record an audio file directly from Python on the Pi.
[image]
I spent a bunch of time debugging Linux microphone drivers to get it working. Once it worked, I was able to record a fixed-length audio file, send it to our wit.ai app, and control the servo.
[video]
After this, I hooked up recording to a button so we could record variable-length commands. The Pi starts recording when the button is pressed down. When the button is released, the recording stops and is automatically relayed to wit.ai.
[video]
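The push-to-talk loop is short; here's a sketch of it (the GPIO pin and arecord settings are assumptions, and the token is of course redacted):

```python
import subprocess
import requests
from gpiozero import Button

WIT_TOKEN = "XXXX"              # our Wit.ai server access token (redacted)
button = Button(17)             # hypothetical GPIO pin for the push-to-talk button

def record_command(path="command.wav"):
    button.wait_for_press()
    rec = subprocess.Popen(["arecord", "-f", "S16_LE", "-r", "16000", path])
    button.wait_for_release()
    rec.terminate()             # stop arecord; it closes the WAV file cleanly
    rec.wait()
    return path

def send_to_wit(path):
    with open(path, "rb") as audio:
        resp = requests.post("https://api.wit.ai/speech",
                             headers={"Authorization": "Bearer " + WIT_TOKEN,
                                      "Content-Type": "audio/wav"},
                             data=audio)
    return resp.json()
```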
Recording quality and response time are still a bit iffy, so the next steps are to improve those if possible. After that, we need to get this working on the Pi Zero and integrate it with the physical helmet to move the mask.
[image]
IronMask: Backend Progress
On the software side, the biggest change we made was switching from the Arduino Uno Wifi to the Raspberry Pi 3. The Raspberry Pi was better suited for our project because it allowed us to call the Wit.ai API directly from a Python script instead of setting up and communicating with a separate server.
We also added a button that the user will press while giving any commands. Otherwise, we would have had to either continuously stream audio to the Wit.ai API, or work out when/how to start recording (when mic input is above a certain volume?), stop recording, and chunk/cut the audio.
One roadblock we've had is the mic: in order to use the amplifier we ordered for the Arduino with the Raspberry Pi, we would have needed a separate component to convert the output. We decided to get a small USB mic for the Raspberry Pi instead, but we haven't been able to test recording directly on the Raspberry Pi because the substitute mic we tried (just for testing) wasn't compatible.
Backend User Flow:
User gives a command while pressing the button.
E.g., “mask close,” “open the mask please,” etc.
When the button is initially pressed, we start recording the input from the mic. We don’t stop recording until we detect that the user has released the button.
When the user releases the button, we cut the audio and send it to the Wit.ai app.
The Wit.ai app transcribes the audio into text and uses NLP (natural language processing) to determine the intent of the text.
We look for two parts of user intent: Whether the subject is “mask” and whether the intent is “on” or “off.”
If the intent is determined to be “mask on” or “mask off,” then a servo moves to lift the mask on or off accordingly. If the subject is not “mask,” nothing happens.
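The last few steps of that flow reduce to a short handler; a sketch (the exact shape of the Wit.ai JSON depends on how the app's entities are configured, and the servo pin is hypothetical):

```python
from gpiozero import Servo

servo = Servo(18)                      # hypothetical GPIO pin

def parse_intent(resp):
    """Pull the intent out of the Wit.ai JSON; we assume an 'intent' entity
    with values like 'mask_on' / 'mask_off' (schema depends on the app setup)."""
    intents = resp.get("entities", {}).get("intent", [])
    return intents[0]["value"] if intents else None

def act(resp):
    intent = parse_intent(resp)
    if intent == "mask_on":
        servo.max()                    # swing the mask down (direction depends on linkage)
    elif intent == "mask_off":
        servo.min()                    # lift the mask up
    # any other subject: do nothing

def query_with_retry(send, path, tries=3):
    """Wit.ai occasionally returns an empty/error response; re-call up to 3 times."""
    for _ in range(tries):
        resp = send(path)
        if resp and resp.get("entities"):
            return resp
    return {}
```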
Completed Components:
Wit.ai app
API interaction works as expected
Issue: sometimes returns an error/empty response but succeeds on the next run with the same inputs; the function is set to re-call the API up to 3 times if a bad response is returned
Button
Initial press detected
Function that returns whether or not the button is currently pressed (will continually be called in recording function until it returns false)
Servo
Moves 0–180° depending on the Wit.ai response (open/close)
Todo:
Switch to Raspberry Pi Zero in order to fit into the mask
Test/connect new mic
Calibrate servo to work with the mask open/close system
[video]
[image]
Putting the mask together
After three layers of resin and sanding, we spray-painted the pieces to give them a smooth look. Using Zap-a-Gap, we then started putting the helmet together.
[images]
We put together the back of the helmet as that piece will not have moving hardware inside.
[images]
In order to get the helmet to fit snugly, the chinstrap must be removable. Thus, I used Velcro to allow us to easily take off the helmet.
[image]
The next steps are to put the Arduino and mechanisms inside the helmet.