bdawkins8
Untitled
25 posts
bdawkins8 · 7 months ago
Blog Post 24
Artifact: https://c-istudios.com/the-future-of-reality-tv-trends-and-innovations-in-the-industry/
As we get closer to the end of the semester, I have narrowed down the topics that I want to focus on for my final project. I will continue my research into the quickly evolving field of BCI technology, while also relating it to media and the potential impacts it could have on television and streaming. The particular avenue of media that I want to explore is reality TV and how BCIs could transform the reality TV industry.
I want to start by highlighting the current trends in reality TV and the landscape of reality TV at the moment. The article attached discusses the future trends and innovations in reality TV and how technology is already impacting it.
With streaming services pushing out reality TV shows at an extreme rate, there is a need for new concepts to be explored. Boundaries are being pushed by producers to capture and hold their viewers' attention, with exciting challenges, stunts, and mind games being explored with human subjects for the benefit of the viewer.
"The future of reality TV also promises to bring more daring concepts and unique storylines. Producers are now pushing the boundaries, exploring controversial topics and creating shows that can capture viewers’ imaginations," according to the article.
As technology continues to intertwine with shows like "The Circle"—a show that follows contestants living in an apartment complex while competing for a prize—and "L.O.VE. AI", a Japanese reality TV dating show that pairs people up with their "perfect match" based on AI, we can look forward to what even more advanced technology can bring reality TV watchers. This advanced technology is already being used in well-known shows like "The Voice," which utilizes voice recognition technology during auditions and performances.
"The increasing willingness of producers to explore daring concepts is creating a vibrant landscape for reality TV that viewers can look forward to in the future," according to the article.
The search for new and exciting concepts prompts me to think that BCIs aren't far from being completely intertwined with the reality TV industry. In other words, for TV shows that can afford it, I think that BCI use will be implemented, in some way, with real people who are either competing, dating, or solving a collective problem.
Those with ethical concerns about public footage of the inner workings of a contestant's brain may raise an eyebrow at this thought; however, let's remember that a lot of reality TV already pushes the boundaries of ethics and morality, with few guardrails in place once a person signs on to do whatever they may be doing in that show.
"We can expect to see this trend continue as producers look for new ways to engage viewers. Technology will provide producers with the tools to create even more creative and groundbreaking concepts that will push the boundaries of reality TV," according to the article.
I'm really excited to explore these topics, and go even further with this research in my final paper.
bdawkins8 · 7 months ago
Blog Post 23
Artifact: https://techround.co.uk/tech/ai-affecting-reality-tv-industry/
Before going even further into the current state of BCIs, I want to highlight the history of BCIs, which dates all the way back to the 1920s. I also want to showcase a technology that shocked me.
As a reminder, a brain computer interface is essentially a pathway of communication that links the electrical activity in the brain to an outside or external device that then reads those signals.
In the 1920s, a German scientist by the name of Hans Berger proved that there are electrical currents in the brain. These currents can be measured using what we know as an EEG, or electroencephalography.
In the '60s and '70s, BCIs were tested and researched on animal brains; however, this proved to be a tumultuous and frustrating task. In 1973, the term 'brain computer interface' was established in scientific literature, coined by Jacques Vidal.
As time went on, researchers and scientists continued to tinker with this technology, experimenting and documenting their findings. As you can imagine, the technology advanced pretty quickly to get where we are today.
"From there, signal processing algorithms continued to advance, refined and classified by scientists who were determined to make more reliable and accurate BCIs. Because they knew that once the communication channels were effective, the potential use cases for brain-computer interfaces could be life-changing," according to the article.
This article also suggests that emerging technologies, like AI, will push BCI development even further.
Neurable, which was founded in 2015, is a neurotechnology company that aims to "create seamless interactions between humans and machines," according to their website.
The 'about' page on their website states the following: "We envision a future where brain-computer interfaces are as ubiquitous as smartphones, empowering individuals to achieve unprecedented levels of productivity, creativity, and well-being. We aim to enable a richer, more personalized dialogue between humans and technology, enhancing every experience with the full spectrum of human intention."
Their claim-to-fame product, the MW75 Neuro, looks like a normal pair of headphones...except for the fact that it tracks and records the user's brain signals using EEG technology.
Here is how it works, according to the website:
1. Each of the billions of neurons in your brain produces electrical signals.
2. These signals are recorded using electroencephalography (EEG) through MW75 Neuro’s soft fabric neural sensors.
3. The data is processed and then interpreted by artificial intelligence (AI) to determine your focus level, prompt you when it’s time to take a break, and more.
4. You can access these metrics in the Neurable app, gaining insights and suggestions that help you work smarter.
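Neurable hasn't published the exact algorithm behind its focus metrics, but a common proxy in EEG research is the ratio of beta-band power to alpha-plus-theta power. Here's a minimal sketch of that idea in Python; the function names, the 256 Hz sampling rate, and the band boundaries are my own assumptions for illustration, not Neurable's actual processing.

```python
import numpy as np

FS = 256  # assumed sampling rate in Hz, typical for consumer EEG headsets

def band_power(signal, fs, lo, hi):
    """Average spectral power of `signal` within the [lo, hi] Hz band."""
    freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[mask].mean()

def engagement_index(eeg, fs=FS):
    """Beta / (alpha + theta) power ratio, a common proxy for attentional focus."""
    theta = band_power(eeg, fs, 4, 8)
    alpha = band_power(eeg, fs, 8, 13)
    beta = band_power(eeg, fs, 13, 30)
    return beta / (alpha + theta + 1e-12)  # epsilon avoids division by zero
```

On synthetic data, a beta-dominant (20 Hz) signal scores much higher than an alpha-dominant (10 Hz) one, which is the direction a focus metric would be expected to move.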
I look forward to researching this technology, and more like it, even further.
bdawkins8 · 7 months ago
Blog Post 22
Artifact: https://venturebeat.com/games/inner-cosmos-shows-its-mildly-invasive-skull-implants-can-treat-depression/
In terms of BCIs for mental health treatment, there is a lot of promise. For those with treatment-resistant depression, who consistently hit dead ends with medication and typical treatment options, the future of BCIs for mental health is hopeful.
Inner Cosmos, a neurotechnology company that implants devices in human brains, states that its technology can be used to treat depression, specifically treatment-resistant depression. In April of 2024, VentureBeat published an article on this, noting that the initial phases of Inner Cosmos' human trials have been promising.
"In particular, Inner Cosmos is going after treatment-resistant patients, who have failed two courses of antidepressants," according to the article. The CEO, Meron Gribetz, said in an interview that the company's "roadmap" starts with treatment-resistant depression, with plans to then expand to major depressive disorder.
This device is labeled as "mildly invasive" and is said to sit level with the layer of skin on one's head. "This device, a small chip implanted in the skull, aims to monitor and stimulate the brain to alleviate symptoms of depression. He said it is a very small implant, in the form of a small disk, that is embedded in the outer layer of the bone of your skull, right under the skin," (Takahashi, 2024). Once implanted, the device can transmit collected data to your doctor.
Inner Cosmos teamed up with Darin Dougherty to create a small disk that is said to be less invasive than a cochlear implant, said Gribetz in an interview. "Over the course of 21 months, the study has demonstrated the safety and feasibility of the Inner Cosmos device, with no serious adverse effects reported among participants," according to the article. The surgery is compared to LASIK, with the procedure said to take just 30 minutes.
When compared to Musk's Neuralink, which uses highly invasive techniques—to target people with paralysis, in particular—Inner Cosmos has created a less invasive method while also appealing to a much wider market: mental health care.
Although these trials are still in their infancy and necessitate further human testing, there is much to be said about the future of mental health care options.
Gribetz shares his excitement, stating that he sees this succeeding for the masses one day. This will take time, though, through building on current research and addressing ethical issues. In order to make this a future for our society, proper technologies must be accessible and desirable to people: "You have to get humanity ready for BCI by doing less invasive surgeries," said Gribetz. Inner Cosmos is doing just that.
bdawkins8 · 7 months ago
Blog Post 21
Artifact: https://www.codewave.ca/biohacking-your-mood-can-brain-computer-interfaces-beat-depression-for-good/
As we get closer to the end of the semester and the final project for this class looms overhead, I want to really concentrate these last few blog posts on how brain computer interfaces can revolutionize mental health care. While my previous posts have focused more on physical disabilities in relation to brain computer interfaces, I want to pivot towards how people struggling with depression, anxiety, and mood disorders could benefit from such a technology.
In early 2024, Codewave published an article titled "Biohacking Your Mood: Can Brain Computer Interfaces Beat Depression For Good?", and while I think this is a pretty bad title, I think that a lot of great points are made within the piece.
Brain computer interfaces are particularly hopeful when it comes to assisting both patients and doctors in better understanding and managing symptoms. Through the use of BCIs, "medical professionals can learn more about the internal workings of mental health disorders and how they respond to therapy," according to the article.
BCIs in the mental health field have a lot of potential not only for doctors, but for patients with depression, anxiety, PTSD, etc. For many people, it would be incredibly helpful to know, in real time, when an emotional response is triggered.
"By providing real-time feedback on a person’s emotions and reactions, a BCI can help them learn to recognize and control their reactions to stressful situations. This can help people better manage their mental health and reduce the likelihood of a flare-up," according to the article.
On the other side of it, this kind of detailed tracking can help doctors prescribe treatment that works for that specific patient, establishing highly individualized care.
There is a lot of promise in this field, and ongoing trials and preliminary findings suggest that this is possible. For example, a University of Pennsylvania study "discovered that individuals receiving BCI treatment exhibited appreciable gains in their general mental health and mood," according to the article. In addition, the article notes that "patients with depression have seen a 60% improvement in their condition after receiving brain-computer interface treatment, which involves the use of a neuromodulation device to stimulate nerves and alleviate symptoms."
That being said, I would be curious how this works with patients who have treatment-resistant depression (TRD). How would TRD be impacted by these findings? How could doctors use BCIs for patients with TRD? Many questions remain, but I will get into that in my next post.
bdawkins8 · 7 months ago
Blog Post 20
Artifact: https://www.startus-insights.com/innovators-guide/whats-currently-happening-in-brain-computer-interfaces/
This piece, published by StartUs Insights, offers a thorough analysis of brain computer interfaces and their current place in society, focusing particularly on the second quarter of 2024 (Q2/2024). It explores highlights of Q2/2024, explaining how and where the data is from, the current landscape of BCIs, milestones and market dynamics, economic and ethical impacts, navigating the future, and applications beyond medicine.
Key highlights from Q2/2024 in the field of brain computer interfaces include the fact that Neuralink, the BCI company founded by Elon Musk, is leading the charge in human BCI applications. While this is true, a rival company—Synchron—is said to be stepping up production and "strengthening its manufacturing skills, highlighting the competitive and dynamic character of the industry," according to the article.
Regarding the market itself, it's important to consider everything that goes into the production and distribution of this technology; expertise in both technology and the network of regulations surrounding this kind of work is necessary for success. A step in the right direction has been made, with ONWARD Medical receiving FDA Breakthrough Device Designation for spinal cord injuries in particular. While this technology isn't related to BCIs specifically, it's hopeful: Breakthrough Device Designation is essentially "designed to help patients and their physicians receive timely access to technologies that have the potential to provide more effective treatment or diagnosis for debilitating conditions of significant unmet need," according to Med-Tech Innovation News.
As discussed in previous blog posts, there are of course ethical considerations for this technology. In fact, China has produced ethical guidelines specifically for brain computer interface research. This analysis published by StartUs raises the question: "Are we prepared for the effects of merging our minds with those of machines?"
This piece also briefly mentions the additional applications that BCIs can have, introducing gaming and space exploration as potentially rich areas of advancement. While I discussed BCIs in relation to gaming in a previous post, the point of space exploration is especially interesting to me. "BCI technology may improve space exploration—perhaps even leading to interplanetary communication," according to the article.
The last point from the article that I will discuss is navigating the future of BCIs, and the implications for society as a whole. Brain computer interface technology is making strides towards making devices less intrusive, which is crucial for accessibility of such a technology. In addition, innovators in this field must keep ethical considerations at the forefront of their mind when developing and carrying out these trials.
I'll end on a significant quote from the article, one that encompasses what I anticipate to be the focus of my final individual project in this class: "Incorporating BCIs into daily life might rewrite the definition of what it is to be human."
bdawkins8 · 7 months ago
Blog Post 19
Artifact: https://health.ucdavis.edu/news/headlines/new-brain-computer-interface-allows-man-with-als-to-speak-again/2024/08
This article, published by UC Davis Health in August of 2024, details the experience of a man, Casey, with ALS (amyotrophic lateral sclerosis) and how a brain computer interface helped him to communicate. And not only communicate, but do it with up to 97% accuracy.
While this BCI was certainly an invasive one, with sensors implanted in Casey's brain, the technology is life changing. Prior to the BCI, Casey could not communicate with his words and his speech was severely impaired. Within minutes of activating this system, though, he was able to communicate the words he was thinking into text that is read aloud from a computer.
Prior to the application of the system, Casey had four microelectrode arrays placed into the left precentral gyrus—a brain region responsible for coordinating speech—in July of 2023. The arrays then record the brain activity from 256 cortical electrodes.
The article does mention previous attempts and research into this sector: "Despite recent advances in BCI technology, efforts to enable communication have been slow and prone to errors. This is because the machine-learning programs that interpreted brain signals required a large amount of time and data to perform," (Yehya, 2024).
In total, there were 84 data collection sessions with Casey, spanning 32 weeks. It is recorded that Casey used the speech BCI to converse both in person and over video for over 248 hours.
Through the duration of these data collection sessions, the system got better and better at accurately collecting and reciting Casey's words. "At the first speech data training session, the system took 30 minutes to achieve 99.6% word accuracy with a 50-word vocabulary," according to the article. "In the second session, the size of the potential vocabulary increased to 125,000 words. With just an additional 1.4 hours of training data, the BCI achieved a 90.2% word accuracy with this greatly expanded vocabulary. After continued data collection, the BCI has maintained 97.5% accuracy," (Yehya, 2024).
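Speech-decoding studies like this one typically report "word accuracy" as one minus the word error rate (WER): the word-level edit distance between the decoded text and the intended text, divided by the length of the intended text. A small sketch of that calculation follows; this is my own illustration of the standard metric, as the article does not describe the UC Davis team's exact scoring code.

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word-level Levenshtein distance divided by the reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    if not ref:
        return 0.0 if not hyp else 1.0
    # dp[i][j] = edits needed to turn the first i reference words
    # into the first j hypothesis words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[len(ref)][len(hyp)] / len(ref)

def word_accuracy(reference: str, hypothesis: str) -> float:
    """Percentage accuracy as 100 * (1 - WER)."""
    return 100 * (1 - word_error_rate(reference, hypothesis))
```

For example, decoding one word wrong out of five would score 80% accuracy under this metric.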
The end of the article details direct quotes from the team that carried this out as well as from Casey himself. He said that not being able to communicate is demoralizing and that technology like this will help people get back into life and society.
While this technology—brain computer interfaces—isn't new, this breakthrough in accuracy for BCIs is. I anticipate that BCIs will become a huge part of the medical sector soon, hopefully with accessible and widespread use among those who would benefit most.
bdawkins8 · 8 months ago
Blog Post 18
Artifact: https://www.weforum.org/stories/2024/06/the-brain-computer-interface-market-is-growing-but-what-are-the-risks/
This artifact explores the many uses and implications of brain computer interfaces. Published in June of 2024 by the World Economic Forum, it highlights the growing economic impact of BCIs as well as the inherent security risks that come with them.
The United States is, notably, leading advancements in the BCI market, with a worth of $1.74 billion in 2022 and expectations for it to grow to over $6 billion by 2030, according to the article. There has been significant funding in the US for research and development into this ecosystem of technology.
I found it interesting that brain computer interface use is not limited to medical settings, and instead spans a wide variety of real-world applications. The article notes five primary application areas, including, of course, medical as well as research, mental wellness, multi-industry, and gaming & entertainment.
I want to include a diagram provided by the World Economic Forum in this article. I think it does a relatively good job of simplifying the process behind BCIs.
[Diagram: the World Economic Forum's overview of how a BCI works]
The main components of this process include brain signal acquisition, signal processing, pattern recognition and machine learning, and sensory feedback.
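As a rough illustration of how those four components chain together, here is a toy closed-loop skeleton in Python. Every stage is a stand-in (a canned sample window, a mean-amplitude feature, a simple threshold in place of a trained model), intended only to show the architecture, not any real BCI's processing.

```python
from typing import List

def acquire_signal() -> List[float]:
    """Stage 1: brain signal acquisition (here, a canned sample window)."""
    return [0.2, 0.9, 0.4, 0.8, 0.7]

def process_signal(window: List[float]) -> float:
    """Stage 2: signal processing -- reduce the raw window to one feature."""
    return sum(window) / len(window)  # mean amplitude as a toy feature

def recognize_pattern(feature: float) -> str:
    """Stage 3: pattern recognition -- a threshold stands in for a trained model."""
    return "intent_detected" if feature > 0.5 else "rest"

def give_feedback(decision: str) -> str:
    """Stage 4: sensory feedback relayed back to the user."""
    return f"feedback: {decision}"

def bci_cycle() -> str:
    """One pass through the four-stage loop described in the article."""
    return give_feedback(recognize_pattern(process_signal(acquire_signal())))
```

The security risks the article describes map onto these stages: brain tapping and misleading stimuli target stages 1 and 4, while training-data manipulation targets stage 3.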
With this, of course, come serious risks. Brain tapping, for example, compromises an individual's confidentiality. It involves intercepting signals transmitted from the brain during the signal acquisition phase. Somewhat similar is another risk: a misleading stimuli attack, which also happens during the signal acquisition phase and can manipulate or bias the outcome. "Misleading stimuli can also be used during feedback to control an individual’s mind. The ability of a BCI application to stimulate the brain introduces a significant risk of hijacking, potentially compelling individuals to engage in actions contrary to their will," (Alohaly, 2024). The third risk noted in the article relates to the machine learning component: this attack would manipulate training or testing examples, leading to skewed results, according to the article.
Due to the reality of these risks, there are several safeguards that must be in place. The article notes four: transparency & consent, regulatory oversight, enhanced security measures, and public awareness & education.
With the growing popularity of such technologies, it's important that education and awareness of BCI use is implemented earlier rather than later. This article was a great start to that education.
bdawkins8 · 8 months ago
Blog Post 17
Artifact: https://engineering.cmu.edu/news-events/news/2024/04/30-noninvasive-bci.html
After learning about brain-computer interfaces in my previous blog post, I decided to dig deeper into the realm of BCIs.
Earlier this year, in late April of 2024, Carnegie Mellon University College of Engineering published an article detailing their research on noninvasive brain-controlled interfaces. This is especially groundbreaking because BCIs are typically invasive. For example, Neuralink—founded by Elon Musk—is focused on giving quadriplegic people the ability to control their computers with their brain. The catch, though, is that you have to have a chip implanted into your brain. Not only is this invasive, but it also raises concerns about access and equity of the product. As opposed to invasive counterparts, noninvasive BCIs provide "increased safety, cost-effectiveness, and an ability to be used by numerous patients, as well as the general population," according to the article. The downsides that come with noninvasive BCIs relate to decreased accuracy and difficulty with interpretation of results.
Carnegie Mellon is a leader in the field of noninvasive BCIs; in 2019, its researchers proved that a mind-controlled robotic arm had the ability to continuously track and follow a computer cursor.
Using an AI-powered deep learning approach, the team of researchers at Carnegie Mellon successfully recorded the capability of their noninvasive BCI. In the study, 28 human participants were given a complex BCI task to track an object on a screen by just thinking about it. During this task, an EEG method recorded this activity from outside the brain. "Using AI to train a deep neural network, the He group then directly decoded and interpreted human intentions for continuous object movement using the BCI sensor data. Overall, the work demonstrates the excellent performance of non-invasive BCI for a brain-controlled computerized device," (Pecchia, 2024).
In addition to its application for able-bodied subjects, this technology has significant implications for those with motor impairments, such as stroke patients, people with spinal-cord injuries, and those with Parkinson's.
Thinking about the future of this technology is exciting. Although this research is still in the early stages, with much still to learn and change, it provides a beacon of hope for those that would benefit from such a technology; specifically people that wouldn't be able to otherwise afford an invasive brain implant.
bdawkins8 · 8 months ago
Blog Post 16
Artifact: https://www.accessibility.com/blog/future-trends-in-digital-accessibility-emerging-technologies-that-could-make-the-web-more-inclusive
Since this class is primarily focused on the future of media and trends, I wanted to discuss this article that I came across regarding the future of digital accessibility.
This article discusses a number of trends and how (and when) they could evolve to be more accessible.
AI and Generative AI are currently changing the online landscape for millions of users. This article discusses how GenAI and machine learning can be integrated into accessibility testing.
"These technologies can automate evaluating websites, applications, and content for accessibility issues. AI-driven accessibility tools can scan and analyze digital assets for compliance with accessibility standards, providing developers with instant feedback and suggestions for improvement," (Roussey, 2024).
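To give a flavor of what such automated scanning involves at its very simplest, here is a minimal sketch using Python's standard-library HTML parser to flag images without alt text. Real tools like those the article describes check many more accessibility criteria; the names here are my own, and note that intentionally decorative images may legitimately carry an empty alt attribute, which this toy check would also flag.

```python
from html.parser import HTMLParser

class AltTextAuditor(HTMLParser):
    """Collects <img> tags that lack alt text as they stream past the parser."""

    def __init__(self):
        super().__init__()
        self.issues = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attr_map = dict(attrs)
            if not attr_map.get("alt"):
                self.issues.append(f"<img src={attr_map.get('src')!r}> is missing alt text")

def audit(html: str) -> list:
    """Return a list of alt-text issues found in an HTML snippet."""
    auditor = AltTextAuditor()
    auditor.feed(html)
    return auditor.issues
```

Running `audit('<img src="a.png"><img src="b.png" alt="A chart">')` would flag only the first image.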
Voice assistants like Amazon Alexa and Google Assistant will also likely see changes, with advancements and greater accuracy in voice recognition so that individuals with mobility and communication impairments can better use the service.
Another point of interest mentioned in this article is brain-computer interfaces (BCIs), which I had previously never heard of. The idea behind brain-computer interfaces is that there is a direct line of communication between the human brain and digital devices. "BCIs can potentially empower users with severe physical disabilities by allowing them to control computers and interact with digital content using their thoughts alone," (Roussey, 2024). This technology would be especially helpful for individuals with paralysis or other mobility impairments.
Quantum computing, which I had first heard of in my Trendspotting class, is meant to solve complex computational problems at a high speed. This could help users with disabilities in a number of ways. "In accessibility, quantum computing could speed up the development of advanced AI algorithms, enabling real-time language translation, image recognition, and natural language processing that surpasses current capabilities," (Roussey, 2024).
A lot is unclear about the future; however, we do know that the media landscape is drastically changing and will only get more intuitive. Users with disabilities should reap the benefits of advanced technology just as much as users without disabilities.
bdawkins8 · 8 months ago
Blog Post 15
Artifact: https://www.shrm.org/topics-tools/employment-law-compliance/accessible-websites-disabilities
Following my previous blog posts, I want to continue on the same theme of web accessibility. This article, published in early 2024, mentions the Americans with Disabilities Act (ADA) and how it relates to businesses and companies, particularly how they present themselves online. For employees and customers with disabilities, accessible web design can either make or break an experience. It's important that all online websites and databases make their information accessible, otherwise the user will leave having had a less than positive experience.
There is a ton of information out there with tips and tricks on how to make a website more accessible, often defaulting to ways to make it easy for a screen reader to successfully help a person with a disability.
Some of the tips that this website offered were making sure images have descriptive alt tags, ensuring that a user can enlarge the content up to 200 percent, and giving ample guidance and instruction for web forms. One suggestion in particular that I found interesting relates to hashtags and the need to capitalize the first letter of each word (e.g., #BreastCancerAwareness). In addition, the article noted the importance of making sure the entire website is navigable using only a keyboard, for those who cannot use a mouse.
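The hashtag tip is simple enough to automate. Here is a tiny sketch (my own helper, not from the article) that builds a CamelCase hashtag from a phrase so a screen reader can announce each word separately:

```python
def accessible_hashtag(phrase: str) -> str:
    """Build a CamelCase hashtag so screen readers announce each word.

    Note: str.capitalize() lowercases the rest of each word, so acronyms
    (e.g., "NASA") would need special handling in a real implementation.
    """
    return "#" + "".join(word.capitalize() for word in phrase.split())
```

For example, `accessible_hashtag("breast cancer awareness")` produces "#BreastCancerAwareness".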
I find this information particularly useful as I am currently experiencing a desire to shift my career path to web development. As a future web developer, I want to ensure that the content and structure that I create is accessible to the fullest extent possible, with measures in place to make sure that all users have a positive experience.
When making a website, it's important to understand that you are not just checking a box, so to speak, when making your site accessible; this process is continual and may need to be altered, changed, or updated over time. In a constantly evolving media and information technology world, it is necessary to keep up.
bdawkins8 · 8 months ago
Blog Post 14
Artifact: https://blog.usablenet.com/banking-online-while-blind-3-best-practices-for-web-accessibility
For this post, I wanted to follow my train of thought from my previous blog post and explore inclusive web design. This particular artifact shares insights from a real user who is blind and how he navigates online banking.
Michael Taylor, the author of this piece, focuses on online banking and the accessibility of a particular online banking platform. The main tool he utilizes is a screen reader. His discussion of inclusive web design, in this context, centers on how the screen reader reads and interprets information based on the website's design.
"For a blind technology user like myself, digital accessibility is paramount in online banking. Once I can manage my finances online, I can enjoy all the rest the Internet offers. Having control over my money is huge for my independence," (Taylor, 2024).
Taylor explores issues specific to the banking platform he uses, including an accessibility bug that interrupted the account selection process, as well as issues with dynamic address fields; these fields typically load an address as you type it, but the form did not let Taylor enter his address manually.
"It only takes one significant accessibility defect to block screen reader users like myself from completing a given task," said Taylor.
Overall, he also included three best practices for improving the accessibility of a banking website: 1) digital statements in PDF form are tricky for screen readers, so banks should provide accessible PDF options, 2) make tab navigation smoother, and 3) improve menu navigation so that accounts and settings are easily accessible.
Of course online banking accessibility is only one realm of digital literacy that should be improved for the visually impaired. This is just one real world example of how a blind person may struggle with a task that non-blind people take for granted every day.
Moving forward, I plan to think of how user accessibility, specifically in digital environments, will evolve. I also remain interested in how GPS systems cater to those that are disabled.
bdawkins8 · 8 months ago
Blog Post 13
Artifact: https://www.loc.gov/nls/services-and-resources/informational-publications/gps-and-wayfinding-apps/
This is a reference guide that I came across which compiles a list of GPS and wayfinding apps, specifically for the "blind and print disabled," according to the National Library Service.
While people with visual disabilities can utilize a number of non-technological tools—like a white cane and service animals, for instance—there are also a number of technological tools that the visually impaired can use. It is particularly important to have technological tools for the visually impaired when they are in transit, going from point A to point B. When it comes to finding destinations, there is only so much a white cane or a service animal can do.
Within this document, the information is split into two categories: mobile apps and apps that work with strategically placed beacons. Not only does this webpage show readers the price of each application, but also the platform each app is available on (e.g., iOS).
For the mobile app section, it notes that some navigation apps are specifically designed for people with visual disabilities while others are meant for the general public but accessible for users with visual impairment.
Autour uses 3D sound to alert users of points of interest around them in real time. Once the user discovers a point of interest that they are interested in, they can select it. Autour is free and available on iOS.
Soundscape allows users with visual disabilities to find desired destinations and tag them with virtual beacons. The app is designed to be used with headphones: "Users hear a bell sound when facing the beacons. When a traveler is not directed toward a beacon, they hear a tapping sound from the direction in which the beacon is located. The app also highlights points of interest and allows users to mark locations such as bus stops."
Another app, iWalkStraight, is an orientation aid for the visually disabled. The app "informs users when they are no longer following a straight trajectory and gives them speech instructions when they stray" according to the National Library Service.
For apps that work with strategically placed beacons, the beacons are meant to be set up ahead of time at a specific venue of interest to the user. Let's say someone knows they will be attending a show at a music hall in a few days; the venue owner can contact one of the companies that offer this service and have beacons placed for future use by visually impaired visitors. Some services that do this are ClickAndGo, LowViz Guide Indoor Navigation, and RightHear. These apps use the strategically placed beacons to inform users of key locations and provide overall guidance.
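To make the beacon behavior described above concrete, here is a rough sketch of how such an app might pick an audio cue: compute the compass bearing from the user to the beacon, subtract the user's heading, and play a bell when the beacon is roughly dead ahead, or a directional tap otherwise. The bearing formula is standard great-circle math; the cue names and the 15-degree tolerance are my own assumptions, not taken from any of the listed apps.

```python
import math

def relative_bearing(user_lat, user_lon, heading_deg, beacon_lat, beacon_lon):
    """Bearing of the beacon relative to the user's heading, in [-180, 180)."""
    d_lon = math.radians(beacon_lon - user_lon)
    lat1, lat2 = math.radians(user_lat), math.radians(beacon_lat)
    # standard great-circle (initial) bearing formula
    y = math.sin(d_lon) * math.cos(lat2)
    x = math.cos(lat1) * math.sin(lat2) - math.sin(lat1) * math.cos(lat2) * math.cos(d_lon)
    compass_bearing = math.degrees(math.atan2(y, x))
    return (compass_bearing - heading_deg + 180) % 360 - 180

def audio_cue(rel_bearing_deg, tolerance=15):
    """Bell when roughly facing the beacon, a directional tap otherwise."""
    if abs(rel_bearing_deg) <= tolerance:
        return "bell"
    return "tap_right" if rel_bearing_deg > 0 else "tap_left"
```

For a beacon due north of the user, a user facing north would hear the bell, while a user facing east would hear a tap from the left.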
While this is certainly not representative of the whole list, it exemplifies some really brilliant applications that have been designed for the visually impaired.
Those of us who are not visually impaired often take for granted how easy it is to get from one destination to another. Of course, GPS apps have their moments, and we get upset or arrive a few minutes late; but imagine how much more frustrating navigation can be for people who are visually disabled or blind, who face far more obstacles along the way.
Thinking back to the conversations in class about future inventions and innovations, I'd like to apply that thinking here. Clearly there is a need for visually impaired people to navigate physical locations. But I also think of other needs: navigating the web, accessing data from the internet, and using a computer for daily tasks, and how visual impairment may affect all of those.
Screen readers are certainly one tool; however, I am curious about digital media literacy and how it may interact with visual impairment. There are a few studies devoted to this interaction.
A study done in 2020, for example, found that visually impaired individuals were most interested in sending and reading emails, online banking, online shopping, and seeking online health information—all of which were considered the most difficult activities for them to perform. The study mentions that this need "may provide opportunities for improved designs to meet the needs of visually impaired users as inclusive designs often focus on discovering a user’s needs, interest, preferences and relevance," (Okonji et al., 2020).
That being said, I'd like to explore this unmet need and attempt to anticipate some future innovations in this realm.
0 notes
bdawkins8 · 8 months ago
Text
Blog Post 12
Artifact: https://www.euronews.com/next/2022/09/22/wayfinding-app-uses-ar-3d-sound-camera-to-guide-blind-visually-impaired-people-in-cities
This article, published by Euronews in 2022, introduced me to an app that I didn't even know existed.
When we think of GPS navigation, we often think of navigating a car or other vehicle. A large part of GPS on mobile devices, though, is walking routes, where accuracy and efficiency matter just as much.
SonarVision, based in France, has taken an underrepresented group, the visually impaired, and made this community the focus of its startup. The idea behind SonarVision is that it allows individuals to "prepare journeys with the help of a human operator, then follow them independently, with unprecedented accuracy," according to the SonarVision website.
Mainstream GPS apps such as Waze, Google Maps, and Apple Maps have not been designed to accommodate the needs of the visually impaired or blind. Even with the addition of screen readers, these apps are still hard to use; they are also not nearly as accurate or helpful as apps designed specifically for aiding the blind.
"SonarVision’s added value, he said, is to guide users from point A to B like a 'super-high-precision GPS' that’s also highly intuitive," (Huet, 2022).
The magic behind SonarVision is that it uses spatial sound (3D sound) to guide the person in the right direction. Essentially, if a sound is coming from the left direction, it would alert the user to turn left.
In addition to using 3D sound for precision, SonarVision uses augmented reality (AR) technology to scan buildings with the phone's camera. This allows the app to accurately geo-track its users, with precision anywhere between 20 centimeters and 1 meter, according to Co-Founder and CEO Nathan Daix.
Although this article was published a few years ago, SonarVision remains a staple for many visually impaired people who need to navigate a busy city, like Paris.
The pair of young French engineering students who founded this startup had a purpose in mind: to tackle issues that may not often get attention, let alone solutions. It reminded me of a main idea in our textbook, "Futuring": these entrepreneurs are shaping the future for disabled users of navigation systems. This is, undoubtedly, an underserved area, and it deserves attention.
0 notes
bdawkins8 · 8 months ago
Text
Blog Post 11
Artifact: https://www.reuters.com/science/scientists-explain-mount-everests-anomalous-growth-2024-09-30/
While I want to stay on a similar path as my previous posts, I came across an article from Reuters, published just a few weeks ago, that piqued my interest. It also happened to attribute some of its findings to the Global Positioning System (GPS).
Mount Everest, commonly known as the world's tallest mountain, is actually still growing. It currently towers 5.5 miles above sea level, but there is data to suggest that it will only grow taller.
"Everest has gained roughly 49-164 feet (15-50 meters) in height due to this change in the regional river system, with the Kosi river merging with the Arun river approximately 89,000 years ago, the researchers estimated. That translates to an uplift rate of roughly 0.01-0.02 inches (0.2-0.5 millimeters) per year," (Dunham, 2024).
What's happening here is isostatic rebound, which involves the rise of land masses on Earth's crust. During isostatic rebound, weight on Earth's surface diminishes (in this case, ice sheets) and the land rises. One expert compares this to a boat rising in the water after its cargo is removed.
Researchers have estimated that isostatic rebound accounts for about 10% of Everest's annual uplift rate.
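Just for fun, the uplift rate quoted in the article can be sanity-checked with quick arithmetic: 15 to 50 meters of uplift spread over roughly 89,000 years. (The slight mismatch with the quoted 0.2-0.5 mm figure comes from rounding in the original estimates.)

```python
# Back-of-the-envelope check of the figures quoted in the article:
# 15-50 m of uplift over roughly 89,000 years.
years = 89_000
low_m, high_m = 15, 50

low_rate_mm = low_m * 1000 / years    # meters -> millimeters per year
high_rate_mm = high_m * 1000 / years

print(f"{low_rate_mm:.2f}-{high_rate_mm:.2f} mm per year")  # 0.17-0.56 mm per year
```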
Knowing this, we can expect Everest to keep increasing in height over time. Taking into account our textbook, "Futuring," it is safe to infer that Everest is currently at its shortest, with new, higher heights (literally) to be gained in the future.
How is this measured, though? GPS measurements.
This is fascinating to me. The global positioning system makes note of not only traffic congestion, but ongoing geological processes as well.
0 notes
bdawkins8 · 9 months ago
Text
Blog Post 10
Artifact: https://its.berkeley.edu/news/your-navigation-app-making-traffic-unmanageable
An article published in 2019 by Jane McFarlane explores the unintended consequences of widespread use of navigation apps like Waze and Google Maps. The UC Berkeley Institute of Transportation Studies—the publisher of this article—focuses on data analytics for emerging transportation issues.
McFarlane explains how highly responsive, user-driven navigation apps can actually end up clogging certain areas, lengthening the overall commute and harming the streets and neighborhoods where the rerouting occurs.
"The apps are typically optimized to keep an individual driver’s travel time as short as possible; they don’t care whether the residential streets can absorb the traffic or whether motorists who show up in unexpected places may compromise safety," (McFarlane, 2019).
There are a number of examples cited in the article; apps like Waze and Google Maps have actually caused accidents due to increased congestion on roads or streets that cannot handle it.
For instance, a steep and narrow road in Los Angeles, originally used as a pathway for goats, was a street that these apps were consistently rerouting drivers to, causing chaos on a road that could not handle it. While Waze and Google Maps weren't aware of the logistical issues this road would pose, it highlights a glaring problem with navigation apps that use real-time traffic patterns to reroute drivers, taking them, literally, on the road less traveled.
"The real problem is that the traffic management apps are not working with existing urban infrastructures to move the most traffic in the most efficient way," (McFarlane, 2019).
Navigation apps like Waze and Google Maps receive data streamed to their servers directly from the devices of their users. This means that heavier use of the app "colors the system's understanding of reality," according to the article.
While this article was published five years ago, and updates have surely been made to these systems since, the glaring issue remains.
An interesting point made in our textbook reminded me of this situation: "It [scanning] can be thought of as the effort to identify and understand those phenomena or aspects of the world that are most relevant to the people or groups who need this information for important decisions," (Futuring, page 98).
If navigation app developers were to take into account the aspects of a specific location that are most relevant to Waze and Google Maps users (e.g., what time a school lets out, steep hills with blind drives, etc.), they could make the routes their apps suggest more efficient, supporting important decisions with better information.
0 notes
bdawkins8 · 9 months ago
Text
Blog Post 9
Artifact: https://www.theverge.com/2024/7/31/24209969/google-maps-destination-guidance-waze-camera-events
To stay on the same track as previous posts, this artifact relates to an article published by The Verge in July of 2024 relating to GPS systems.
The article informs the reader of the increasing similarities between Google Maps and Waze, driven by recent software updates; specifically, the ways Google Maps has come to resemble Waze.
The good thing is that no one's toes are being stepped on here: Waze and Google Maps are both owned by Google, so it was most definitely an intentional choice to bring these two navigation apps even closer to one another.
I find it interesting that I have such strong feelings about both of these navigation apps; with Waze reigning number one and Google Maps at the very bottom of that list, however long it may be. I hate Google Maps for a number of reasons...but we won't get into that right now.
One update to Google Maps noted in this article, however, piqued my interest.
"Google Maps service is also adding new destination guidance that will identify a building’s entrance as you approach it. The feature will pinpoint the exact building you’re navigating to by highlighting it in red, with a green indicator pointing to the main entrance of the building. Google will also start showing nearby parking lots," (Shakir, 2024).
An update like this is exciting for people with driving anxiety like me, who nearly go into breakdown mode when in an unfamiliar area and there is parking involved.
According to the article, Waze saw some cool updates, too: "Waze users can now report new types of traffic cameras, like those that go off if you drive in the bus or HOV lanes or those that check for seat belts and whether you’re texting and driving."
Waze has really honed in on traffic violation prevention, which I am sure many people appreciate.
In regards to the textbook—"Futuring"—I thought of what categories of DEGEST this trend may fall into. DEGEST is a means of classifying trends, with each letter referring to a different category.
The trend of increased traffic knowledge on navigation apps—while actively on the road—is an interesting one. In this case, I would say it most definitely falls into the T (Technology) category, but also the S (Society) category. This trend impacts society in that it makes us more informed drivers, but also allows us to avoid violations and be more efficient in our trips. As a whole, society will benefit from this, of course. But what are the costs...?
I'll explore that in my next post.
0 notes
bdawkins8 · 9 months ago
Text
Blog Post 8
Artifact: https://deepgram.com/ai-glossary/inference-engine
Tumblr media
Above is a diagram of how an inference engine works. Inference engines are said to be the core component of an artificial intelligence system (or expert system). This element of AI systems "mirrors human reasoning by interpreting data, inferring relationships, and reaching conclusions that guide decision-making processes" according to Deepgram.
In the diagram, we see both ends of the typical genAI system. In this case, the knowledge base and the "expert" are both generated by artificial intelligence. The knowledge base comes from the wealth of information that genAI draws from.
Our "User" or human person, inputs a query. Then, our expert, or genAI, uses the knowledge base, an inference engine, and the user interface to output advice to the user. What I found particularly interesting was that the inference engine was at the heart of this process, constantly interacting with the user interface and the knowledge base, as depicted by the sage green arrows in the photo.
Deepgram notes that inference engines serve as the intellectual engine of expert systems. In fact, inference engines process data, anticipate outcomes, and inform actions as a means of simulating human-like reasoning.
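To make that reasoning loop concrete, here is a toy forward-chaining engine. This is my own illustrative sketch with made-up rules, not Deepgram's implementation: the engine repeatedly fires any rule whose premises are already in the knowledge base, adding the rule's conclusion as a new fact, until nothing changes.

```python
# Toy forward-chaining inference engine (illustrative sketch only).
# Each rule is a (set of premises, conclusion) pair; the rules and
# facts below are hypothetical examples.
rules = [
    ({"has_fever", "has_cough"}, "possible_flu"),
    ({"possible_flu"}, "recommend_rest"),
]

def infer(facts, rules):
    """Repeatedly fire rules whose premises are all known facts,
    adding their conclusions, until no new facts can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)  # rule fires: new fact inferred
                changed = True
    return facts

print(sorted(infer({"has_fever", "has_cough"}, rules)))
# ['has_cough', 'has_fever', 'possible_flu', 'recommend_rest']
```

Notice how the second rule can only fire after the first one has added "possible_flu" to the knowledge base, which is the back-and-forth between the inference engine and the knowledge base that the diagram depicts.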
"New and improved ways were found to do things and old ways discarded. The result was an increasing ability to achieve human purposes," (Futuring, pg 30). While this quote is in the context of great transformations in human history, it also applies to expert systems.
The following quote ties these two ideas together: "Inference engines, therefore, stand at the crossroads of data and discernment, embodying the transformative power of AI to replicate and even surpass human cognitive functions in specialized tasks," said Deepgram.
0 notes