LAB 5 Documentation
Building on our in-class lesson on using Pandas to analyze datasets, I decided to go further and analyze a dataset I chose which contains the most popular weed strains and their characteristics. For the record, I don’t smoke weed (asthma), but I really enjoy growing plants and find the whole process interesting.
Link to dataset: https://www.kaggle.com/datasets/gthrosa/leafly-cannabis-strains-metadata
Link to code: https://colab.research.google.com/drive/10gvl0ovVaySMVzCkfPzKZonJg0uYhGsH?usp=sharing
I began by opening the dataset and running a few commands to better understand what I was looking at.
I opened the dataset and ran describe() to get summary statistics:
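As a sketch of this first step, here is roughly what opening the dataset and calling describe() looks like. The filename and column names are assumptions on my part, not taken from the actual Kaggle download, so a small stand-in frame is used here:

```python
import pandas as pd

# In practice the Kaggle CSV would be loaded like this (filename assumed):
# df = pd.read_csv("leafly_strain_data.csv")

# A small stand-in frame with the same general shape of information:
df = pd.DataFrame({
    "name": ["Blue Dream", "OG Kush", "Sour Diesel"],
    "most_common_terpene": ["Myrcene", "Limonene", "Caryophyllene"],
    "thc_level": ["18%", "20%", "19%"],
})

# include="all" summarizes string columns too (counts, unique values, top value)
print(df.describe(include="all"))
```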
I was particularly interested in better understanding the specific terpenes (natural compounds that distinguish the individual strains) and their effects on the overall strength of the strain. I pulled the most common terpenes, as seen below:
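Pulling the most common terpenes comes down to a value_counts() call. The column name and values here are stand-ins, since I'm sketching the approach rather than reproducing the real dataset:

```python
import pandas as pd

# Hypothetical terpene column; the real dataset's column name may differ
terpenes = pd.Series(
    ["Myrcene", "Myrcene", "Limonene", "Caryophyllene", "Myrcene", "Limonene"]
)

# Count how often each terpene appears, most common first
counts = terpenes.value_counts()
print(counts)
# Myrcene appears 3 times, Limonene 2, Caryophyllene 1
```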
From here, I was mainly interested in figuring out which terpene was associated with the highest THC concentrations, a metric of how strong the strain is on the user. This was initially difficult because, as I found out, the THC percentages were stored as strings rather than numbers, which made ranking the concentrations inaccurate.
I knew the ranking was wrong because the initial dataset (first picture) contained a THC concentration of 20%. I had to figure out how to convert the percentage column into a number by removing the % sign.
This was the final result: the 'thc_level' column was converted to a number, and I was able to see the highest percentage concentrations. With more technical know-how, I would have liked to pull averages for each terpene and rank them from highest to lowest concentration.
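A minimal sketch of both steps, the % stripping and the per-terpene averages I wanted, assuming hypothetical column names ('most_common_terpene', 'thc_level') and toy rows:

```python
import pandas as pd

# Toy rows standing in for the real dataset (column names assumed)
df = pd.DataFrame({
    "most_common_terpene": ["Myrcene", "Myrcene", "Limonene"],
    "thc_level": ["18%", "24%", "20%"],
})

# Strip the % sign and convert the strings to numbers so ranking works
df["thc_level"] = df["thc_level"].str.rstrip("%").astype(float)

# Average THC per terpene, ranked from highest to lowest
ranking = (
    df.groupby("most_common_terpene")["thc_level"]
    .mean()
    .sort_values(ascending=False)
)
print(ranking)
```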
BTE Lab 4 Documentation
Intro-
I began this lab skeptical, citing my lack of python (and software in general) experience as proof that the only way we’d finish would be to choose the easiest task. However, my partner refused to be daunted and insisted we push ourselves, certain that we’d figure it out. He was absolutely correct, as we ended up taking on and completing all three of the projects laid out.
🤫Promote a Quiet Environment-
We began with the first and lowest-difficulty project: using the Microbit's built-in microphone to measure nearby sound levels and issue a scrolling LED text warning when the noise became too loud. We familiarized ourselves with the microphone.sound_level() function, which reports the sound level as a number between 0 and 255. The code itself was pretty simple and took us only a few iterations to work out:
from microbit import *

message1 = "Please be quiet "
message2 = "Shutup so help me god"

while True:
    if microphone.sound_level() >= 150:
        display.scroll(message2)
    elif microphone.sound_level() >= 100:
        display.scroll(message1)
    sleep(1000)
We began by defining two messages as variables, stored as message1 and message2. The idea was that the first (and nicer) message would be issued when the volume increased to a certain threshold, giving the user an opportunity to quiet down. If they chose to keep making noise, pushing the volume past the second threshold (150), the harsher message would be issued instead. This was accomplished with an if statement tied to the second volume level (150), followed by an elif that would trigger the first message when only the first threshold (100) was reached. A sleep statement set at 1000ms was added to the end of the loop to ensure the program would have time to monitor changing volume levels before issuing a new message. The code worked; however, it was difficult to test, since the raw volume numbers (0-255) were hard to tell apart by ear. Furthermore, displaying quickly scrolling small text on a dimly lit LED grid is probably not the best way to alert a loud person to check their volume. Future designs would use a more noticeable alert.
🕺🏽Encourage Good Posture-
We then proceeded to the posture case problem, tasked with using the Microbit’s accelerometer to monitor the angle of someone’s back and alert them via the speaker if they began to slouch.
My partner and I knew that the first step we had to take was to understand which axis of acceleration (x,y,z) outputted by the accelerometer we should focus on. We used a simple print statement to output the live values of the accelerometer into the console, tilting the Microbit until we could understand which direction corresponded to which axis. The z-axis seemed to be the most practical, mainly due to the ergonomic positioning of the USB cord coming from the Microbit when taped to the user’s back. Similar to the first problem case, the code proved to be easier than we thought, as shown below:
from microbit import *
import speech

set_volume(255)

while True:
    z_strength = accelerometer.get_z()
    if z_strength >= 200:
        speech.say('Lean forward')
        sleep(1500)
    elif z_strength <= -200:
        speech.say('Sit up')
        sleep(1500)
    sleep(300)
A new development from the last experiment was using the Microbit’s ability to replicate speech via its small speaker, a feature we decided to incorporate into the warning. We settled on +200 and -200 as values for what we determined to be “bad posture,” with +200 relating to leaning backward and -200 leaning forward. If the variable z_strength (output of z-axis component of accelerometer) was outside the 400 unit range deemed “good posture,” one of the corresponding messages would be spoken through the speaker. All this code was bundled into a while True loop, allowing us to run it constantly, with sleep statements after each verbal warning to ensure time for correction and prevent a nonstop overlap of spoken warnings.
The code itself worked and did exactly as we hoped; however, the one issue we had was that the spoken warning was nearly unintelligible, even on the highest volume setting (255).
[Video demo]
👁️Stay Attentive-
With time running out, we decided to try a hurried attempt at designing a solution for the third problem. The task seemed easy enough: we just had to build some sort of visual countdown timer that could be reset with the push of a button, or music would be played. We decided first to build a semi-hardcoded version and, if we had time, update it to be more complex. (we didn’t have time)
Regardless, our hardcoded version did solve the problem, albeit in a not very glamorous or technically involved way. But maybe there’s a lesson here in not overcomplicating things?
from microbit import *
import music

set_volume(255)
time = 3

while True:
    sleep(1000)
    if time <= 0:
        time = 0
        music.play(music.FUNERAL)
    display.show(time)
    time -= 1
    if button_a.was_pressed():
        time = 3
We imported the music library, set the speaker volume to max, and then created a variable called "time," which we set to 3. This variable was decreased at 1-second intervals, acting as our clock, with its value displayed on the LED grid. The core of the code is the if statement (if time <= 0:), which triggers the music if the user doesn't press the button in time to reset the clock. While time was above 0, the clock would subtract 1 from its value every 1000ms until it hit 0. If the button was pressed, the time variable would be set back to 3, and the cycle would repeat.
[Video demo]
Final Thoughts:
While we arguably could have dived deeper to write more efficient code, this lab was ultimately a success for me, since it both increased my confidence in software and reignited my interest in exploring non-hardware tech.
BTE Lab 3 Documentation
The objective for this lab was two-fold: First, we were tasked with using physical hardware to create a half-adder before then using a simulation to create a series of logic gates, ultimately culminating in a full-adder. Through both these tasks, the goal was to experiment with encapsulation by combining our knowledge of individual logic gates to create more complex functions.
Part 1: Physical Half-Adder
My partner and I began by building the physical half-adder using a combination of AND and XOR gate chips. Referencing the datasheets and utilizing our understanding of the basic circuitry behind them, we could visualize the unencapsulated component layout and circuitry.
We then tested each component individually to rule out the possibility of a broken component creating a misleading false negative. The actual assembly of the circuit went quickly, and it only took a few different iterations, marked by changing LED leg positions and the orientation of the chips, to get the desired result.
Pictured above is our final circuit. While it may not be the prettiest (note the 220-ohm resistor stretched precariously across the 74HC08AP chip), the LEDs lit up when the buttons were pressed. However, we needed to verify that the half-adder was actually computing correct outputs, so we checked them against this logic table:
Thankfully for us, they did:
[Video demo]
Part 2: Virtual Logic Gates
With the first part of the lab completed in class, I was then tasked with installing Digital Logic Sim (among a whole suite of other packages to get it running) and assembling a series of logic gates.
These logic gates are shown below:
By starting with a simple NAND gate, I was able to slowly encapsulate my gate circuits and insert them into more complex functions. This ended with a full-adder built from two half-adders, which were, in turn, built from even more encapsulated circuits I had made prior.
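The same encapsulation idea can be sketched in plain Python: a half-adder is just XOR for the sum and AND for the carry, and a full-adder chains two half-adders with an OR on the carries. This is a sketch of the logic, not the simulator's own representation:

```python
def half_adder(a, b):
    """One-bit half-adder: sum is XOR, carry is AND."""
    return a ^ b, a & b            # (sum, carry)

def full_adder(a, b, carry_in):
    """Full-adder encapsulating two half-adders plus an OR gate."""
    s1, c1 = half_adder(a, b)          # first half-adder
    s2, c2 = half_adder(s1, carry_in)  # second half-adder
    return s2, c1 | c2                 # final sum and carry-out

# Check every input combination against ordinary addition
for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            s, cout = full_adder(a, b, c)
            assert cout * 2 + s == a + b + c
```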
While I generally prefer to work with hardware, taking joy in the physical/real-world use of my hands, the logic simulator proved to be helpful in better understanding how to encapsulate gates. The main reason for this was that I could divert 100% of my attention to the inputs and outputs without worrying about whether a chip was using the wrong pin or an LED was switched around. While it is fun to worry about those things, the logic simulator was ultimately a better-suited, more time-efficient method for better understanding gates.
However, that’s not to say I didn’t run into issues with using the software. Software has its own bugs and kinks unique to working on a computer, such as requiring extra programs to be installed and navigating poorly documented websites to find a download. While the actual use of the software once it was working proved to be an easier environment to experiment in, using hardware was (at least for me) easier to get up and running quickly. Perhaps most importantly, I felt I had control over debugging hardware issues. If something didn’t work, I could easily isolate the problem and exchange a component. However, with the software, when it decided not to open, there was nothing I could do other than hope there was a different version to download.
BTE Lab 2 Documentation
From the start, I felt much more confident in this lab compared to the previous, mainly because I grew up messing around with hardware, whereas software was relatively foreign to me. My partner also had similar childhood experiences working with hardware, so we jumped right into the project with a base level of knowledge.
After first testing all the components to ensure they were functioning individually, we began assembling the circuits on the breadboard. The simple LED circuit took a few seconds, and we quickly progressed to adding in the tactile button.
With these initial circuits under our belt, we began the task of integrating the 2N2222 TO-92 transistor. This was 100% our fault, but since we had not read the full instructions posted for the lab, we spent a good bit of time trying to diagnose the correct pin wiring for the transistor, probing with a multimeter and rewiring every possible combination. Not satisfied with the way this was going, we pulled up the datasheet and found the pinout diagram there.
At this point, we racked our memories trying to remember how to read the ohm ratings of resistors from their color bands, and began assembling the rest of the circuit.
Thanks to perhaps a combination of our earlier experience with hardware and pure luck (definitely more of the latter), our circuit worked with only minimal diagnosing. The few issues we ran into were attributed to LEDs being wired in reverse and to not correctly jumping the power rails of the breadboard. These proved to be quick fixes, and we successfully got the circuit to work!
[Video demo]
BTE Microbit Blog Post 1
Microbit Name Tag Project
Objective: We set out to create a digital nametag programmed with an icebreaker game for a school social event. The nametag would display a number in binary via the Microbit's LED grid, and it would be up to the user to decode the number and enter it through the buttons.
Process: After plugging in our Microbit and opening the IDE, we quickly realized that we were completely confused by the actual prompt and didn’t understand what was being asked of us. Eager to begin coding and not wanting to speculate any longer, we opted to jump in and start programming what we thought was the solution rather than take additional time and better understand the task:
Our first attempts at understanding how to program the LED
Eventually, we realized that we had become distracted and were just experimenting with making the microbit do other weird things instead of solving the actual problem. We reread the prompt together, asked our partners for assistance in understanding the end goal, and started for real, this time programming the icebreaker game. I had never used Python before, so this experimentation stage of messing around actually really helped me understand the language.
Once we understood the prompt, the actual coding itself proved to be relatively easy. We began by implementing the most basic and essential features: the storage of our name and the secret number as variables, a variable that registered the number of times button A was pressed, and a way to display the current number of button presses. We would then input the secret number on the backend and manually control the pixels to show that number. The pièce de résistance was having the Microbit confirm whether the number of button presses (buttonCount) was equal to the secret number (secretNum) when button B was pressed.
Further Development: While we were able to develop a very barebones version of the icebreaker game, there's still more we could do to push it over the top. Ideally, the user wouldn't have to manually program the LED grid to display the binary translation of the secret number. Instead, they could "set it" on the Microbit itself, as opposed to doing it in the IDE and having to reflash each time it changed.
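One step toward that idea would be computing which top-row pixels to light from the number itself, instead of hardcoding set_pixel calls. This is a hypothetical helper (the left-to-right bit ordering is my assumption, not what our hardcoded version used):

```python
def binary_pixels(n, width=5):
    """Return the top-row x-coordinates to light for n (0 to 2**width - 1)."""
    bits = format(n, "0{}b".format(width))   # e.g. 6 -> '00110'
    return [x for x, bit in enumerate(bits) if bit == "1"]

# On the micro:bit we would then do (not run here):
# for x in binary_pixels(secretNum):
#     display.set_pixel(x, 0, 9)

print(binary_pixels(6))   # columns 2 and 3 hold the two set bits of 110
```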
CODE
from microbit import *
import music

name = 'DylWill'
secretNum = 6
buttonCount = 0

# Light the top-row pixels that encode the secret number in binary
display.set_pixel(2, 0, 9)
display.set_pixel(4, 0, 9)

while True:
    if button_a.was_pressed():
        buttonCount += 1
        display.clear()
        display.show(buttonCount)
        sleep(2000)
        display.clear()
        display.set_pixel(2, 0, 9)
        display.set_pixel(4, 0, 9)

    if button_b.was_pressed():
        if buttonCount == secretNum:
            music.play(music.BA_DING)
            display.show(name)
        else:
            music.play(music.FUNERAL)
        buttonCount = 0
        display.show(buttonCount)
        sleep(2000)
        display.clear()
        display.set_pixel(2, 0, 9)
        display.set_pixel(4, 0, 9)
8:30 pm, project due tomorrow, and I’m going to Home Depot to get more wood to make a box
Audio options
I finished the last small details of my buildings and researched song options
[Embedded video]
I’ve spent the last few classes manipulating the video to be stereoscopic and then getting the glasses to be the right distance (this video is 100mm)
[Embedded video]
This is my work in progress for my project. I've spent time adjusting the rendering styles, as I talked about with Melissa last class. (2.3.21) Here is an example of a watercolor-type filter I was able to apply to the SketchUp buildings.
Jan 31st update
Finished the second of 3 buildings I’m working on. Began 3rd. Remaining classes will be dedicated to animating the scenes