Text
12/11/18 – Navigation + Photography
Today we got the navigation and photography software working in the new physical form. Initially, we were having trouble with turning: forward and backward motion worked, but turns would barely move the bot. We eventually figured out that the turning problems were due to the new acrylic bot being much heavier than the cardboard prototype. Our original wheel setup could not apply enough force to turn the heavier bot on the carpet of SciLi 8. We solved this by swapping the 9 V battery powering the wheel motors for a more powerful one.

After some tuning for the higher-powered battery, we got the full navigation and photo stack working. You can see in this video that the bot recognizes where I am, (slowly) turns toward me, drives forward, and takes a picture. Once it is done taking the picture, it makes a big turn to start searching for other people.
youtube
The long pauses between movements in this demo are due to the slow speed of object detection running on the Pi. We only get one frame every 2 seconds, so for safety we need to take small steps and pause to wait for new observations of the environment. This would be much faster and smoother with a higher-fps detection model, which I was planning to run on the Neural Compute Stick. However, that ended up being much more complicated than I anticipated, so I didn't have time to finish it. In a future iteration with the NCS, we could potentially get 20 fps, up from the current 0.5.
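For reference, the control loop is roughly shaped like the sketch below. This is a simplified illustration rather than our actual code: the helper functions, box format, and thresholds are all placeholders, but it captures the step-pause-observe rhythm forced by the ~2-second detection interval.

```python
import time

FRAME_PERIOD = 2.0  # Pi-only detection runs at ~0.5 fps, so ~2 s between observations
STEP_TIME = 0.4     # short forward burst; never move far on a stale observation

# The helpers below stand in for the real motor/camera code and are placeholders.
def detect_people():
    """Return a list of (cx, cy, w, h) person boxes from the latest camera frame."""
    return []

def turn_toward(cx):
    """Nudge left or right so the person's box centre moves toward the image centre."""

def step_forward(seconds):
    """Drive straight for a short burst."""

def take_photo():
    """Trigger the camera."""

def big_search_turn():
    """Large turn to start looking for the next person."""

while True:
    people = detect_people()
    if not people:
        big_search_turn()                       # nobody in view: keep scanning
    else:
        cx, cy, w, h = max(people, key=lambda b: b[3])  # tallest box ~ closest person
        if h < 300:                             # person still far away (box is small)
            turn_toward(cx)
            step_forward(STEP_TIME)
        else:                                   # close enough: take the candid shot
            take_photo()
            big_search_turn()
    time.sleep(FRAME_PERIOD)                    # pause until a fresh detection is available
```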
Photo



Today we assembled our final prototype! This included putting on the mirror film, screwing all the parts together, covering the back pieces in velvet, and attaching/connecting the electronics. There are still a few more refinements to be made before Wednesday, such as covering up the electronics in the back, but it’s almost done!
Text


Pooja, Luke and I met last night to continue putting together the pieces for our last iteration of CandidBot. After much effort we were able to coat the main structure and support pieces in velvet. We also assembled the wheels. It seems the mirror front will need a replacement, which we will hopefully get to this weekend!
Text
12/7/18 – CandidBot Navigation
We made progress yesterday combining the object tracking and navigation software with our cardboard prototype. We attached the Pi, display and camera to the prototype.


We then got the camera working with the display.
youtube
We also got the camera feed pipelined into the object tracker. Here you can see an image from the camera on the bot detecting the people in the room behind us.

Then, we worked on motion. If the bot can’t detect a person to approach, it turns in circles until it finds someone:
youtube
When the bot does detect someone, it approaches them to take a picture:
youtube
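The decision logic behind this search-then-approach behavior is simple. Below is a minimal sketch of the idea (not the exact code on the bot; the tuple format, frame width, and tolerance are illustrative): pick the largest person box and steer based on how far its center sits from the middle of the frame.

```python
def steering_command(boxes, frame_width=640, center_tolerance=60):
    """Decide the next move from the latest person detections.

    boxes: list of (cx, cy, w, h) bounding boxes; empty when no one is detected.
    Returns one of "spin", "left", "right", "forward".
    """
    if not boxes:
        return "spin"                                       # nobody found yet: keep turning in circles
    cx, _, _, _ = max(boxes, key=lambda b: b[2] * b[3])     # largest box = nearest person
    offset = cx - frame_width / 2
    if offset < -center_tolerance:
        return "left"                                       # person is off to the left of the frame
    if offset > center_tolerance:
        return "right"                                      # person is off to the right
    return "forward"                                        # roughly centred: drive toward them
```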
Text








Many updates today on our yet-to-be-renamed CandidBot! Between 9am and 12pm this morning, our project underwent a complete turnaround and is finally starting to come to life :) We received the acrylic piece for the final version of our robot, created transparent wheel cases by bending the pieces of acrylic we had previously laser cut, assembled the wheels into the cases and onto the bottom of our robot’s base, put together a support system that will keep the structure upright and stop the acrylic from flopping around, and got the wheel software and accompanying hardware going to see the full structure move within our final design! Very exciting! 🎉🎉 We also made plans for next steps this evening, on Friday, and throughout the weekend. Hopefully by Monday our final iteration will start looking very sleek :)
Text




On Monday we worked on our cardboard prototype of the final robot, and focused on how we will connect the wheels to the base for both this and the final iteration.
Text
11/28/18 – Tracking
We realized that for the bot to identify and approach groups of people to take their picture, it has to be able to recognize that the people it detects are the same people over multiple timesteps. This is because the bot has to pick someone to approach and keep approaching that same person for the whole time it takes to move to them.
To accomplish this, I merged our object detection model with a Kalman filter tracker. The tracker looks at the centers of the detected bounding boxes and, based on their movement, predicts where it thinks they will be in the next frame. When it receives the detected bounding boxes from the next frame, it compares its predictions against the new boxes to determine which boxes belong to the same person across frames. In the video below, you can see by the color coding that the tracker is able to uniquely identify Matt and Sol over almost the entire video.
youtube
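For anyone curious, the tracker boils down to a constant-velocity Kalman filter per person plus a matching step between predicted and detected centers. Here is a stripped-down sketch of that idea; it is not our exact implementation, and the noise values and gating distance are just illustrative numbers.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

class CenterTrack:
    """Constant-velocity Kalman filter over one bounding-box centre (x, y)."""

    def __init__(self, cx, cy, dt=1.0):
        self.x = np.array([cx, cy, 0.0, 0.0])          # state: [x, y, vx, vy]
        self.P = np.eye(4) * 10.0                      # state covariance
        self.F = np.array([[1, 0, dt, 0],              # constant-velocity motion model
                           [0, 1, 0, dt],
                           [0, 0, 1, 0],
                           [0, 0, 0, 1]], dtype=float)
        self.H = np.array([[1, 0, 0, 0],               # we only observe the centre
                           [0, 1, 0, 0]], dtype=float)
        self.Q = np.eye(4) * 1.0                       # process noise (hand-tuned)
        self.R = np.eye(2) * 5.0                       # measurement noise

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]                              # predicted centre for the next frame

    def update(self, cx, cy):
        z = np.array([cx, cy])
        y = z - self.H @ self.x                        # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)       # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P


def match(tracks, detections):
    """Pair predicted track centres with new detection centres (Hungarian assignment).

    tracks: list of CenterTrack; detections: list of (cx, cy) centres from the new frame.
    Note: calling predict() here also advances each track to the new timestep.
    """
    if not tracks or not detections:
        return []
    preds = np.array([t.predict() for t in tracks])
    dets = np.array(detections)
    cost = np.linalg.norm(preds[:, None, :] - dets[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] < 100]   # gate away far-fetched matches
```

After matching, each matched track gets update() called with its new centre, unmatched detections start new tracks, and tracks that go unmatched for too long are dropped.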
Text




Today in class we started off by trying to build a prototype of the final version of our robot. However, as we went on with the build, a series of concerns arose about how to translate the cardboard design into acrylic, which prompted us to reconsider and simplify the shape of our design. Different ideas came out of our brainstorming session, and moving forward we decided to contact Bud and research material prices before next class. Hopefully by Wednesday we will have another solid game plan to keep moving efficiently toward our final critiques!
Video
Finally got photos to upload from the Pi to Google Drive. I had run into some authorization issues, but I just plugged the Pi into a monitor, and now uploads work without a hitch.
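In case it's useful to anyone, the upload side can be handled by a library like PyDrive in just a few lines; a sketch is below, with the file path as a placeholder. (The one-time OAuth consent step in the browser is presumably what needed the monitor.)

```python
from pydrive.auth import GoogleAuth
from pydrive.drive import GoogleDrive

gauth = GoogleAuth()
gauth.LocalWebserverAuth()        # opens a browser once for the OAuth consent step
drive = GoogleDrive(gauth)

def upload_photo(path):
    """Upload one saved photo from the Pi to Google Drive."""
    f = drive.CreateFile({"title": path.split("/")[-1]})
    f.SetContentFile(path)
    f.Upload()

upload_photo("photos/candid_001.jpg")   # hypothetical filename
```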
Text
11/14/18 – Object Detection
Today I managed to get the YOLOv3 object detection network running on both my laptop and our Raspberry Pi. Here’s a sample video of it detecting me (and some other people) in the SciLi:
youtube
On my laptop, detection runs at ~30 frames per second, but on the Pi we only get about 0.5 fps, which isn’t enough for real-time detection. To solve this, we will incorporate a Neural Compute Stick (a small USB accelerator for neural-network inference), which should speed up prediction significantly.
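For context, one common way to run YOLOv3 on both a laptop and a Pi is through OpenCV's DNN module; the sketch below shows that route, with the config and weights paths as placeholders. It isn't necessarily byte-for-byte what runs on our Pi, but it is the same idea: feed a frame through the network and keep the boxes classified as "person" with enough confidence.

```python
import cv2
import numpy as np

# Paths are placeholders: the standard Darknet YOLOv3 config and pretrained weights.
net = cv2.dnn.readNetFromDarknet("yolov3.cfg", "yolov3.weights")

def detect_people(frame, conf_threshold=0.5):
    """Return (x, y, w, h) boxes for every 'person' detection in a BGR frame."""
    h, w = frame.shape[:2]
    blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416), swapRB=True, crop=False)
    net.setInput(blob)
    outputs = net.forward(net.getUnconnectedOutLayersNames())

    boxes = []
    for output in outputs:
        for det in output:
            scores = det[5:]
            class_id = int(np.argmax(scores))
            if class_id == 0 and scores[class_id] > conf_threshold:   # COCO class 0 = person
                cx, cy, bw, bh = det[0] * w, det[1] * h, det[2] * w, det[3] * h
                boxes.append((int(cx - bw / 2), int(cy - bh / 2), int(bw), int(bh)))
    return boxes
```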

Next, I will use a clustering algorithm on the bounding boxes to identify groups of people and direct the bot where to go.
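I haven't settled on the algorithm yet, but one simple option is DBSCAN over the box centers, since it doesn't need the number of groups in advance. A rough sketch (the pixel distance threshold here is made up and would need tuning):

```python
import numpy as np
from sklearn.cluster import DBSCAN

def group_people(boxes, eps=120, min_samples=2):
    """Cluster person bounding boxes into groups by how close their centres are.

    boxes: list of (x, y, w, h). Returns a list of groups, each a list of box indices.
    eps is the pixel distance that still counts as 'standing together' (illustrative value).
    """
    if not boxes:
        return []
    centers = np.array([(x + w / 2, y + h / 2) for x, y, w, h in boxes])
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(centers)
    groups = {}
    for i, label in enumerate(labels):
        if label != -1:                       # -1 means DBSCAN treated the box as noise
            groups.setdefault(label, []).append(i)
    return list(groups.values())
```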
Text
Today we also took our robot downstairs to the SciLi lobby to observe how it would behave in a wider environment, and to get some insight into how people experienced the device. Here you can see the first-hand footage filmed by our robot!
Text
Today we assembled and tested our robot’s first proper iteration. For simplicity’s sake we opted not to use the Pi camera in this test, and instead used an iPhone that filmed continuously rather than taking photos upon face detection. The overall result was very good! Moving forward, the next concerns to address will be adjusting the wheel behavior to accommodate the new, wider base (small turns that were previously enough to reposition the car barely shifted the full robot), improving the base’s stability, and making the interaction with people around the structure feel more natural. Another step will be migrating to the Raspberry Pi camera and the features the guys have implemented alongside it, so that the experience is as faithful as possible to the end result we are striving for.


Video
Here’s a video of the base of the ‘CandidBot’ in action! The bot is programmed to go straight for 2 seconds, then pause for 3 seconds to take a photo, turn in a random direction for 1 second, and then repeat that cycle. It has a distance sensor to make sure it doesn’t crash into anything: if it comes close to an obstacle, it backs up, turns 180 degrees, and then starts the original cycle again.
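For the curious, the cycle described above is simple enough to sketch in a few lines. The version below uses Python with gpiozero purely as an illustration; the pin numbers, distance threshold, and turn timings are placeholders, and the real base isn't necessarily driven this way.

```python
import random
import time
from gpiozero import Robot, DistanceSensor

robot = Robot(left=(7, 8), right=(9, 10))   # placeholder motor pins
sensor = DistanceSensor(echo=17, trigger=4) # placeholder ultrasonic-sensor pins

while True:
    if sensor.distance < 0.2:      # within ~20 cm of an obstacle
        robot.backward()
        time.sleep(1)
        robot.right()              # rough 180-degree turn, tuned by timing
        time.sleep(2)
    robot.forward()
    time.sleep(2)                  # drive straight for 2 seconds
    robot.stop()
    time.sleep(3)                  # pause for 3 seconds while the photo is taken
    if random.random() < 0.5:      # turn in a random direction for 1 second
        robot.left()
    else:
        robot.right()
    time.sleep(1)
    robot.stop()
```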

Text




A lot of great progress was made today in bringing our newly-named, potentially-to-be-renamed “CandidBot” to life! We split our six-person team into three groups for the day: camera operation, wheel operation, and design/construction.

Matt and Ben focused their efforts on camera programming. As of right now, the camera continuously takes pictures and saves them to the Pi. Ben is now working on auto-uploading the images to Google Drive, while Matt is focusing on object detection, which we will use to decide when “CandidBot” should snap a photo.

On the wheel development end, Sophie and Annabel did a great job programming our previous Arduino car to move forward, wait a few seconds (while the camera takes the photo), turn a number of degrees, and repeat the process. They also made the car react to its surroundings: whenever the ultrasonic sensor detects it is about to “collide” with something, it backs up, turns around, and heads off in a different direction.

Regarding design and construction, Pooja and I (Pauline) assembled all the main components needed for this iteration of “CandidBot”. We started with the triangular suspension pole that will host the camera, drilling a hole for attaching it and scoring away a square-shaped area around it to accommodate the camera’s base. We also built the triangular base that will replace the car once we move the wheels onto our actual prototype; this base will support the camera pole as the robot moves around. For this part, we stacked two flat triangular layers and two similar layers with a triangular cut-out where the pole will sit. What remains for the base is adding supporting walls along the inner lining of the cut-out triangle, to give the camera additional stability. All in all, the group has been achieving great things and “CandidBot” is slowly starting to take shape :)
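On the camera side, a continuous-capture loop with the picamera library looks roughly like the sketch below; the resolution, save path, and interval are placeholders rather than our actual settings.

```python
import time
from picamera import PiCamera

camera = PiCamera(resolution=(1280, 720))
time.sleep(2)                                   # give the sensor a moment to warm up

# Save a numbered photo every few seconds; the directory and interval are placeholders.
for path in camera.capture_continuous("/home/pi/photos/img{counter:04d}.jpg"):
    print("saved", path)
    time.sleep(5)
```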
Text





During class on Monday the “telepresence” group had a chance to brainstorm some ideas for our project. Everyone seemed to be interested in the “Event Photographer Robot,” so we started pooling ideas for possible approaches. We all had great additions to the concept and were excited to start putting it into practice! At the end of class we also started learning how to operate a Raspberry Pi camera, the first step in making our ideas a reality.
Text

Last week in class we started assembling and programming our Arduino-powered cars. The task at hand was to have the car move in a square, which we semi-successfully did. The issue with our car was that the wheel motors seemed to change power between runs; each time we placed the car down to test it, the wheels moved at different speeds. This made it practically impossible for the car to move in a straight line, since each wheel spun at a different speed, inevitably causing it to turn. For this reason, the final result of our “square” path can be seen above: a curved path followed by a change in angle, and so forth. Had the wheels performed consistently, the code would have yielded a square trajectory.
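The routine itself was essentially dead reckoning: drive one edge for a fixed time, make a timed ~90-degree turn, and repeat four times. A rough equivalent in Python with gpiozero is below (the actual cars run on an Arduino, and the pins and timings here are made up); it only traces a square if both motors really do run at the same speed.

```python
import time
from gpiozero import Robot

robot = Robot(left=(7, 8), right=(9, 10))   # placeholder motor pins

# Dead-reckoned square: timed straight edge, then a timed ~90-degree turn, four times.
for _ in range(4):
    robot.forward()
    time.sleep(2)        # edge length is set purely by drive time
    robot.right()
    time.sleep(0.7)      # turn time tuned by hand to roughly 90 degrees
robot.stop()
```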