Testing of Lily (Good Job Lily!!!)
[Embedded YouTube video]
In the Google Assistant Library settings, a conversation turn finishes when it times out, gets no response, or has no follow-up intent from the user. It is very awkward to call Lily with the wake word every time I want to talk to it. As a result, I rewrote the Python code to change how the Google Assistant handles its event types.
Every time, the log showed something like:
INFO:root:ON_RESPONDING_FINISHED
INFO:root:ON_CONVERSATION_TURN_FINISHED
so I wrote a custom function to restart the conversation, which then logs:
INFO:root:ON_CONVERSATION_TURN_STARTED
(I will upload the Python code to my website later.)
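Since the code is not up yet, here is a minimal sketch of the restart idea, based on the google.assistant.library samples; the credentials path and device model id below are placeholders, not my actual setup:

import json
import logging

import google.oauth2.credentials
from google.assistant.library import Assistant
from google.assistant.library.event import EventType

CREDENTIALS_FILE = '/home/pi/assistant-credentials.json'  # placeholder path
DEVICE_MODEL_ID = 'my-device-model'                       # placeholder id

def process_event(assistant, event):
    logging.info(event.type)
    # When a turn finishes with no follow-on turn, start a new conversation
    # immediately so the wake word is not needed for the next question.
    if (event.type == EventType.ON_CONVERSATION_TURN_FINISHED
            and event.args and not event.args['with_follow_on_turn']):
        assistant.start_conversation()

def main():
    logging.basicConfig(level=logging.INFO)
    with open(CREDENTIALS_FILE) as f:
        credentials = google.oauth2.credentials.Credentials(token=None, **json.load(f))
    with Assistant(credentials, DEVICE_MODEL_ID) as assistant:
        for event in assistant.start():
            process_event(assistant, event)

if __name__ == '__main__':
    main()

The key call is start_conversation(), which re-opens the microphone without waiting for the wake word.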
Yet I ran into a very real problem!
When I restart the conversation so the assistant listens to me constantly, I send out a huge number of requests to the Google Cloud servers. Because of that, I got blocked by Google (WTF). After being blocked, I received this error:
"requests.exceptions.HTTPError: 429 Client Error: Too Many Requests for url:"
Google flagged me as a DDoS attacker and blocked me several times, which is a big challenge for a developer. After searching the web, I realized that everyone has a quota on the Google Assistant API (max. 500 requests every 24 hours), while development sometimes requires a lot more. It is not just me: other AIY users have hit the same problem.
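For anyone hitting the same error, the standard client-side mitigation is to retry with exponential backoff; a minimal sketch with the requests library (note that backoff only spaces requests out, it cannot raise the daily quota):

import time
import requests

def get_with_backoff(url, max_retries=5):
    """GET a URL, retrying with exponential backoff when the server returns 429."""
    delay = 1.0
    for _ in range(max_retries):
        response = requests.get(url)
        if response.status_code != 429:
            response.raise_for_status()
            return response
        # Prefer the server's Retry-After hint when it is present.
        wait = float(response.headers.get('Retry-After', delay))
        time.sleep(wait)
        delay *= 2
    raise requests.exceptions.HTTPError('429: retries exhausted for ' + url)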
I ended up giving up on this method, even though it produced the best version of my chatbot.
More explanation will be uploaded later.
GT Exhibition Setup (In Progress)
Day 1

My dancing grass dances when it hears music! If you would like to see this movement too, play music near the plant and be patient!
[Embedded YouTube video]
How I Built My Chatbot
For this project, I used DialogFlow integrated with a webhook to build my chatbot. The advantages include: (1) better control over the chatbot, (2) a rich GUI, (3) being easy to learn, (4) webhook support. I will write more about how I built my agent in DialogFlow if I have time. However, I wanted to make the chatbot more intelligent, so I also tried coding my own chatbot with TensorFlow (an open-source machine learning library), yet the result was not ideal.
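A side note on the webhook piece: DialogFlow fulfillment is just a small HTTP service that receives the matched intent as JSON and returns a reply. A minimal sketch using Flask (the 'share-happiness' intent name and the replies are made-up placeholders, not my actual agent):

from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route('/webhook', methods=['POST'])
def webhook():
    # DialogFlow sends the matched intent and parameters as JSON (v2 format).
    req = request.get_json(force=True)
    intent = req['queryResult']['intent']['displayName']
    if intent == 'share-happiness':  # hypothetical intent name
        reply = 'Thank you! I feel much better now.'
    else:
        reply = 'Let me think about that...'
    # 'fulfillmentText' becomes the agent's spoken/displayed response.
    return jsonify({'fulfillmentText': reply})

if __name__ == '__main__':
    app.run(port=5000)

In DialogFlow, fulfillment is enabled per intent and pointed at this service's URL.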
As for the TensorFlow approach: basically, what we have to do is feed the program with datasets so the machine can "learn"! A chatbot needs training data to generate its own responses. I used conversation datasets for machine learning; below is a popular conversation dataset from convo that I downloaded from the web:
Even after feeding it lots of conversation data ranging from movies to dramas, the chatbot did not seem to get smarter. Actually, the conversations were quite weird, which left me speechless...
me: Hello? bot: Hi.
me: What is your name? bot: My name is Chris.
me: Are you sleepy? bot: Yes, I am real.
me: It's 2:00am now. bot: What do you mean?
me: I mean it is middle of night and we should get some sleep. bot: 7.
me: What is your job? bot: I'm a gui designer.
me: Are you plan to take over the world? bot: Ye...no.
me: Yahoo bot: Hello help.
me: Repeat after me bot: Now I lay you down to sleep I pray the Lord your soul to keep if you should die before you wake up free that pray the Lord my soul to take.
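For context, corpora like the one above are usually flattened into (prompt, reply) pairs before training a sequence-to-sequence model. A minimal sketch, assuming the corpus is a plain text file with one utterance per line ('conversations.txt' is a made-up filename):

def load_pairs(path):
    """Pair each utterance with the one that follows it, as (prompt, reply)."""
    with open(path, encoding='utf-8') as f:
        lines = [line.strip() for line in f if line.strip()]
    return list(zip(lines, lines[1:]))

pairs = load_pairs('conversations.txt')  # hypothetical corpus file
for prompt, reply in pairs[:3]:
    print(prompt, '->', reply)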
My thought: machine learning is a huge field closely tied to data science, and it is hard for a newbie like me to build a perfect chatbot with my existing knowledge. Besides, machine learning is still far from refined; there is a lot left to be explored and discovered.
DialogFlow Agent Testing
https://bot.dialogflow.com/2c47c6a4-bc31-42fa-b1f4-b8baf71e2d8e
Leaf Curling Mechanism - Using Servo Motor & Extra Thin Fishing Line
After the failure of the muscle wire, I tried to curl Lily's leaves using another mechanism.

Demo Video:
[Embedded YouTube video]
More explanation will be uploaded later.
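In the meantime, a minimal sketch of how a servo can wind the fishing line to curl a leaf, assuming RPi.GPIO with the servo signal on GPIO pin 18 (the pin and angles are placeholders, not my final wiring):

import time
import RPi.GPIO as GPIO

SERVO_PIN = 18  # hypothetical pin, BCM numbering

GPIO.setmode(GPIO.BCM)
GPIO.setup(SERVO_PIN, GPIO.OUT)
pwm = GPIO.PWM(SERVO_PIN, 50)  # standard 50 Hz servo signal
pwm.start(0)

def set_angle(angle):
    """Map 0-180 degrees onto a 2.5-12.5% duty cycle and move the servo."""
    pwm.ChangeDutyCycle(2.5 + angle / 18.0)
    time.sleep(0.5)          # give the horn time to reach the position
    pwm.ChangeDutyCycle(0)   # stop sending pulses to avoid jitter

try:
    set_angle(0)    # leaf relaxed (fishing line slack)
    set_angle(120)  # wind the line to curl the leaf
finally:
    pwm.stop()
    GPIO.cleanup()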
Leaf Curling Mechanism - First Try with Muscle Wire (Fail!)
I bought two types of nitinol wire for making the petals/leaves of the plant Lily.
Their activation temperatures are 45°C and 80°C.

The petals and leaves of Lily curl up and down in response to speech. To make the movement look natural, I decided to try muscle wire rather than a servo motor to control Lily's leaves. I followed this tutorial to test the muscle wire:
http://highlowtech.org/?p=1448
Testing Video (Shaping the muscle wire):
Burn it with a flame for a few seconds so it holds the shape.
[Embedded YouTube video]
Demo Video (I thought it was a big success...):
[Embedded YouTube video]
A BIG PROBLEM!!!
Unlike in the tutorial, my muscle wire would not relax back to its original shape at room temperature, which means it stays stuck in the pre-trained shape...
It also burned. Muscle wire can eventually overheat.
This may be due to:
(1) The diameter of my muscle wire is too large (1 mm), which makes it difficult to contract.
(2) The tutorial uses muscle wire with a 90°C activation temperature, so its effect is more dramatic than my wire's.
To Do List (Updating)
Link to my To Do List:
https://docs.google.com/document/d/1u0ZKcVudq2oHq1EgfQL4FyyVevvYutviTSgCOO327jo/edit?usp=sharing
List of Materials (Updating)
Link to my list of materials:
https://docs.google.com/document/d/1IkY7DAey10QTnbhcrdUrr107JSaoikRo8dajXlBnWh0/edit?usp=sharing
Difficulties and Problems When Testing AIY Projects
In the middle of April, I tested out all the basic functions of the AIY Voice Kit, and now I am able to ask questions and issue voice commands to my programs. However, as a newbie to the Linux, Python and Raspberry Pi environment, I encountered quite a lot of problems while testing the AIY Kit. There are also limitations with the AIY Kit, and I figured out some ways to work around them.
With my very basic knowledge, I often got stuck at various points with my Voice Kit. At the early stage, I had audio problems and could not run the Google Assistant gRPC demo. I fixed the audio problem by adjusting the volume in Alsamixer. For the assistant gRPC demo, the error message was: "KeyError: 'unsupported language_code'". I searched for this error on the AIY Project forum and found that quite a lot of people had encountered the same problem. What we had in common was that our locale language was not English. Since Google only supports a limited number of languages, I changed the language to one of the supported ones and changed my country setting. It worked like magic.
Sound Testing in AlsaMixer
Error Message

After these problems were solved, I moved on to the next stage: talking to the Google Assistant. Rather than sounding like a person, the Google Assistant really sounds like an assistant; it even calls itself "Google Assistant" sometimes:
Testing Video #4:
[Embedded YouTube video]
She asks me to grant permission and tells me to open the Google Home app on my phone.
I tried to fix the problem by using IFTTT to create custom commands and responses. IFTTT is a company separate from Google; it provides a software platform that connects apps, devices and services from different developers in order to trigger automations involving them.
Demo Video
[Embedded YouTube video]
I created an IFTTT applet and then used a webhook to do HTTP callbacks, making the Google Assistant give custom responses. However, I realized this is not a solution for every answer from the Google Assistant, and the round trip through the cloud platform adds reaction time. Later on, more problems were found: Google Assistant does not support Continued Conversation outside the US, and many services are not available in Hong Kong even though many Google services are bundled together. Besides, whenever I try to share my feelings with the Voice Kit, it always gives me the same response: "Sorry, I don't know how to help."
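For reference, triggering an IFTTT applet from Python boils down to one HTTP POST to the Webhooks service; a minimal sketch (the key and the 'lily_cheer_up' event name are placeholders for my actual setup):

import requests

IFTTT_KEY = 'YOUR_WEBHOOKS_KEY'  # placeholder: from the IFTTT Webhooks service page
EVENT_NAME = 'lily_cheer_up'     # hypothetical applet event name

def trigger_applet(message):
    """Fire the IFTTT Webhooks trigger; the applet receives it as value1."""
    url = 'https://maker.ifttt.com/trigger/{}/with/key/{}'.format(EVENT_NAME, IFTTT_KEY)
    response = requests.post(url, json={'value1': message})
    response.raise_for_status()

trigger_applet('I feel happy today')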
To solve this canned-response problem, I will try to work with Google DialogFlow to create an agent (also known as a chatbot) that can talk to and listen to its user for a more natural conversation.
Besides Continued Conversation being unavailable, Google also does not support custom hotword detection, so I used Snowboy, a customizable hotword detection engine, to generate a custom hotword for my Voice Kit using its API. The demo video is in the article "Documentation of Work Process".
More about SnowBoy:
https://github.com/Kitt-AI/snowboy
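A minimal sketch of the detection loop, following the demo code in that repository ('lily.pmdl' is a placeholder for a personal model trained on the Snowboy site):

import snowboydecoder

MODEL = 'lily.pmdl'  # placeholder: a personal hotword model

def on_hotword():
    # In the real project, this is where the assistant conversation starts.
    print('Hotword "Lily" detected!')

detector = snowboydecoder.HotwordDetector(MODEL, sensitivity=0.5)
print('Listening for the hotword... press Ctrl+C to exit')
# start() blocks and calls the callback each time the hotword is heard.
detector.start(detected_callback=on_hotword, sleep_time=0.03)
detector.terminate()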
Another difficulty is that the instructions in the MagPi magazine (which comes with the AIY Voice Kit) and on the official AIY Project website are outdated and do not match my V1 kit (the updated version is V2). AIY Projects updates very quickly since it is a new product and platform. As a result, I had problems importing the modules, but luckily I could often find solutions on the AIY forums and GitHub, so it was not a very big problem.
My current plan is to build a chatbot for my project. At this point, I am quite confused by the environment because I am not familiar with passing data using JSON, etc., but I will try to learn about it.
Seminar About AIY Kits

Since I will be using the Google AIY Kits for my project, my advisor Ryan suggested that I attend a seminar held at CityU on 30 March.
It gave a brief introduction to the AIY Vision and Voice Kits, how to assemble them, technical descriptions, etc. During the seminar, the speaker from Google shared some inspiring ideas about how people build fun and playful hands-on projects with AIY Kits; he also mentioned possible industry-level uses of the AIY Kit, such as sorting cucumbers.
What was very new to me was the introduction of the Google TensorFlow model, a machine learning model for AI vision applications. If people want to train their own machine learning model, they can take many photos of an object (the one they want the machine to identify) from different angles and create a frozen graph in TensorFlow. Next, they load that compute graph into the Vision Kit to perform object recognition. I also came up with an idea for the AIY Vision Kit, even though it has nothing to do with my FYP: developing an app for people with skin diseases, or people who want to know more about their skin condition. I could train the TensorFlow model on images of skin-disease patients so that it recognizes the symptoms of specific skin diseases. People would simply take photos of their skin, upload them to the app, and the app would tell them what skin problems they probably have. Then people would no longer need to be confused by absurd search-engine results such as "fart can lead to death"; they could get some basic medical consultation from the app.
OK, back to my FYP. There are some techniques I found very useful: the speaker introduced "Coding with Chrome" and the "Google Machine Learning Crash Course" (sorry for the bad-quality photos). "Coding with Chrome" is basically an easy UI for people who are not familiar with the programming environment. It is like the Scratch programming environment, where you drag preset commands instead of writing code; I will try it out when I get to the difficult coding parts.
Secondly, the Google Machine Learning Crash Course helps people train their own models in TensorFlow. I will also check this out because I was quite confused when the speaker explained the TensorFlow model during the seminar.
Example project using the AIY Vision Kit:
It is a lamp that searches for faces to interact with. It employs the facial recognition system of the AIY Vision Kit to analyze people's emotions from their facial expressions.
It kind of reminds me of this Arduino project:
[Embedded Vimeo video]
Explanation on the Google TensorFlow Model:
Seminar Poster:

Documentation of Work Process

Assembling the AIY Voice Kit:
[Embedded YouTube video]
Testing with my AIY Voice Kit:
[Embedded YouTube video]
[Embedded YouTube video]
While I was testing the Voice Kit, I found that the answers are sometimes weird; it really sounds like a Google Assistant. I will try to fix this problem later on.
Some problems were found and need to be fixed:


Since the Google Assistant API does not support custom wake words, I used the customizable hotword detection engine "Snowboy" to generate a wake word to trigger my Google Assistant.
I chose "Lily" as my custom wake word.
Custom hot-word testing:
[Embedded YouTube video]
Artist Statement
Growing a plant is actually a highly interactive activity that is rewarding and creative. Plants are more energetic than we think; they are just very slow "animals". What if a plant were like a person who could communicate and respond to us, as though it were conscious?
People who live in the city often bear huge pressure. However, it is difficult to find a listener who can offer a shoulder to cry on, and to top it all off, some people have no friend to talk to. I think a plant is a perfect listener and "tree hole" that is easy to talk to and listens to you no matter who you are.
"Cheer Up Lily" is an interactive object installation: a plant that recognizes and responds to the user's actions and voice. When the user approaches the plant, it asks and invites the user to talk to it. The plant's emotions and actions depend on the interaction between the user and the plant. It can grow better and trigger special interactions with some verbal encouragement. You can cheer the plant up by telling it about your happiness, or bring it down by swearing or sharing sadness.
Exhibition Space
Exhibition Location:
Singing Waves Gallery L3, CMC
Approximate Size:
2m x 2m
Floor Plan:
Field Investigation:


After I investigated the exhibition location, I found some changes that may be needed for setting up my installation.
The area I was assigned is a corner with slanted walls. I may need to do some projection mapping to ensure the projection lands correctly on the slanted surfaces.
For the table projection, I will try to place the projector on top of the nearby walls (points 1 & 2 in the image) using the following methods: a mirror may be needed to redirect the projection, or white cubes can be piled up with the projector placed on top.
If neither method produces the ideal effect, my back-up plan is to swap the white table for a glass table: as a last resort, the projector would be mounted upside down under the table and project onto the glass.