#distributed artificial intelligence robotics
Text

This Saturday, March 1, people will gather at Tesla outlets around the country to demonstrate against Elon Musk's efforts to consolidate autocracy. This is an opportunity to point to the origins of authoritarianism in the capitalist economy.
Here is a flier design you can print and distribute:
https://crimethinc.com/posters/depose-trump-depose-musk
Using the market to exploit us and state violence to control us, Elon Musk and Donald Trump are trying to consolidate power in the hands of a billionaire elite. They want to establish a totalitarian dystopia in which artificial intelligence does away with our livelihoods while killer robots keep us in line. By targeting undocumented people and trans people, they hope to channel anger towards those who are most vulnerable while they get away with murder.
There is no point looking to the Democrats or the court system for help. The Democrats identified Trump as a fascist, then welcomed him into office. Trump already controls the highest court in the land; his lackeys dominate the legislative branch. The only thing that could stop them is widespread resistance.
Get together with the people that you trust. Build networks with others who feel the way you feel. Identify weak points in the ruling order and look for ways to go on the offensive.
When Trump tried to impose the Muslim ban in 2017, thousands blocked airports across the country. When millions filled the streets in 2020, Trump lost control. We can deal with Elon Musk by cutting off his profits at the source. Start with Tesla.
135 notes
·
View notes
Text
Just going to put this out there, since it may not have occurred to people: we, as a species, can actually decide not to build artificial general super-intelligence, that is, we can decide not to have AI.
That doesn't mean that we will make that decision.
That doesn't mean that it wouldn't be a long, obnoxious, drawn-out process to get there.
However, existing human leaders have a shared interest in being on top of society. If an AI is on top of every society, then a human leader is not on top of that society. Human leaders can actually decide, collectively, if supported by a large number of smart people, to ban the manufacture and distribution of the technology, so that the largest it can get is what some guy in his garage can build. This can become the default.
Likewise, from the perspective of people who maintain their control socially, it can be better to distribute control over weapons systems rather than having an entirely robotic army. Either some general controls the robot army, in which case he can instantly overthrow the government, or the leader has the robot army, which means you're all fucked if he cannot retain his sanity.
It's the same for genetic engineering.
One, if you change every one of your kid's genes in order to turn him into a genetically-engineered superman, then he doesn't really have your genes anymore, so you're basically raising an adopted kid. Some people might do that, but it's likely to be disrupted intergenerationally.
Two, if every single person on Earth has the ability to raise their kid's IQ to 180, or to give them Hollywood / supermodel tier good looks, or to make them as charismatic as historical figures, then the children of powerful people will have to compete with that.
Genes aren't the only thing that determine what people are like, of course, and we don't know the extent of what's genetic and what's not. It's unclear if you even can guarantee an Einstein.
However, it's entirely possible for world leaders, hospitals, and institutions to settle around an equilibrium where only selecting from the genes people already have, with a few tweaks for conventional diseases, is considered ethical.
I am not saying that you should be against these technologies. I am also not saying that you should be in favor of these technologies.
People are going to tell you that both of these technologies are completely, 100% inevitable, that there is nothing you can do to stop them, because they are the inevitable future end state of humanity, due to the existence of competition.
I am telling you that you have a choice.
I am not saying that it will be an easy choice, or that it will be without risk. I am not saying that you will be able to completely eliminate such technologies, or that every result will be foreseeable.
However, humanity has already placed limits on conflict, including on tools and tactics. Absolute competition does not rule everything, there is more than one arena of competition, and sometimes people cooperate, too.
To find and impose suitable limits will require flexibility and strength. A great deal of information about the nature of these technologies remains unknown, and will only be revealed in the decades to come. Those seeking to prevent a catastrophe will need to adapt as new circumstances arise; a ready-made framework could help to provide initial traction on the problem, but the underlying reality is likely to be unexpected, and therefore not fully accounted for.
11 notes
·
View notes
Text
Bayesian Active Exploration: A New Frontier in Artificial Intelligence
The field of artificial intelligence has seen tremendous growth and advancement in recent years, with various techniques and paradigms emerging to tackle complex problems in machine learning, computer vision, and natural language processing. Two concepts that have attracted a lot of attention are active inference and Bayesian mechanics. Although both techniques have been researched separately, their synergy has the potential to revolutionize AI by creating more efficient, accurate, and effective systems.
Traditional machine learning algorithms rely on a passive approach, where the system receives data and updates its parameters without actively influencing the data collection process. However, this approach can have limitations, especially in complex and dynamic environments. Active inference, on the other hand, allows AI systems to take an active role in selecting the most informative data points or actions to collect more relevant information. In this way, active inference allows systems to adapt to changing environments, reducing the need for labeled data and improving the efficiency of learning and decision-making.
One of the first milestones in this line of work was the "query by committee" algorithm, analyzed by Freund et al. in 1997, which uses a committee of models to determine the most informative data points to query, laying the foundation for future active learning techniques. Another important milestone was the introduction of "uncertainty sampling" by Lewis and Gale in 1994, which selects the data points the model is most uncertain or ambiguous about, so that labeling them yields the most additional information.
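To make the idea concrete, here is a minimal, hypothetical sketch of pool-based uncertainty sampling in Python; the toy data, logistic-regression model, and labeling loop are illustrative assumptions rather than details taken from the papers above.

```python
# Pool-based active learning with least-confidence uncertainty sampling (sketch).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy problem: a small labeled seed set and a large unlabeled pool.
X_seed = rng.normal(size=(20, 2))
y_seed = (X_seed[:, 0] + X_seed[:, 1] > 0).astype(int)
X_pool = rng.normal(size=(500, 2))
y_pool = (X_pool[:, 0] + X_pool[:, 1] > 0).astype(int)   # hidden "oracle" labels

X_train, y_train = X_seed.copy(), y_seed.copy()
for _ in range(10):                                  # ten labeling rounds
    model = LogisticRegression().fit(X_train, y_train)
    probs = model.predict_proba(X_pool)              # class probabilities for the pool
    uncertainty = 1.0 - probs.max(axis=1)            # least-confident points first
    pick = int(np.argmax(uncertainty))               # the most ambiguous point
    X_train = np.vstack([X_train, X_pool[pick]])     # "query the oracle" for its label
    y_train = np.append(y_train, y_pool[pick])
    X_pool = np.delete(X_pool, pick, axis=0)
    y_pool = np.delete(y_pool, pick)
```

Each round spends its labeling budget on the point the current model finds hardest to classify, which is the intuition behind both uncertainty sampling and query by committee.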
Bayesian mechanics, on the other hand, provides a probabilistic framework for reasoning and decision-making under uncertainty. By modeling complex systems using probability distributions, Bayesian mechanics enables AI systems to quantify uncertainty and ambiguity, thereby making more informed decisions when faced with incomplete or noisy data. Bayesian inference, the process of updating the prior distribution using new data, is a powerful tool for learning and decision-making.
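As a toy illustration of that updating step (an assumed Beta-Bernoulli example, not something taken from the post), a prior belief about a coin's bias can be refined one observation at a time:

```python
# Conjugate Bayesian updating of a Beta prior over a coin's bias (illustrative).
alpha, beta = 1.0, 1.0                  # Beta(1, 1) prior: every bias equally likely
flips = [1, 0, 1, 1, 0, 1, 1, 1]        # observed data: 1 = heads, 0 = tails

for x in flips:
    alpha += x                           # each head increments alpha
    beta += 1 - x                        # each tail increments beta

mean = alpha / (alpha + beta)            # posterior mean of the bias
var = alpha * beta / ((alpha + beta) ** 2 * (alpha + beta + 1))  # posterior variance
print(f"posterior Beta({alpha:.0f}, {beta:.0f}): mean={mean:.3f}, var={var:.4f}")
```

The posterior captures not just a best guess but also how much uncertainty remains, which is exactly the quantity an active learner can exploit when deciding what to measure next.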
One of the first milestones in Bayesian mechanics was the development of Bayes' theorem by Thomas Bayes in 1763. This theorem provided a mathematical framework for updating the probability of a hypothesis based on new evidence. Another important milestone was the introduction of Bayesian networks by Pearl in 1988, which provided a structured approach to modeling complex systems using probability distributions.
While active inference and Bayesian mechanics each have their strengths, combining them has the potential to create a new generation of AI systems that can actively collect informative data and update their probabilistic models to make more informed decisions. The combination of active inference and Bayesian mechanics has numerous applications in AI, including robotics, computer vision, and natural language processing. In robotics, for example, active inference can be used to actively explore the environment, collect more informative data, and improve navigation and decision-making. In computer vision, active inference can be used to actively select the most informative images or viewpoints, improving object recognition or scene understanding.
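Here is a hedged sketch of how those two pieces might be combined for exploration; the grid world, per-cell Beta belief, and sensor noise level are assumptions invented for illustration, not an algorithm prescribed by the work discussed above.

```python
# Toy Bayesian active exploration: always measure the most uncertain grid cell.
import numpy as np

rng = np.random.default_rng(1)
truth = rng.random(25) < 0.3              # hidden occupancy of a 5x5 grid (flattened)
alpha = np.ones(25)                       # Beta(1, 1) prior per cell
beta = np.ones(25)

for _ in range(60):                       # 60 sensing actions
    var = alpha * beta / ((alpha + beta) ** 2 * (alpha + beta + 1))
    cell = int(np.argmax(var))            # pick the cell with the most uncertain belief
    hit = truth[cell] if rng.random() > 0.1 else not truth[cell]  # 10% sensor noise
    alpha[cell] += hit                    # Bayesian update of that cell's posterior
    beta[cell] += 1 - hit

belief = alpha / (alpha + beta)           # posterior occupancy estimate per cell
print(np.round(belief.reshape(5, 5), 2))
```

The agent's data collection is driven by its own posterior uncertainty rather than a fixed scan pattern, which is the core of the active-exploration idea described above.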
Timeline:
1763: Bayes' theorem
1988: Bayesian networks
1994: Uncertainty Sampling
1997: Query by Committee algorithm
2017: Deep Bayesian Active Learning
2019: Bayesian Active Exploration
2020: Active Bayesian Inference for Deep Learning
2020: Bayesian Active Learning for Computer Vision
The synergy of active inference and Bayesian mechanics is expected to play a crucial role in shaping the next generation of AI systems. Some possible future developments in this area include:
- Combining active inference and Bayesian mechanics with other AI techniques, such as reinforcement learning and transfer learning, to create more powerful and flexible AI systems.
- Applying the synergy of active inference and Bayesian mechanics to new areas, such as healthcare, finance, and education, to improve decision-making and outcomes.
- Developing new algorithms and techniques that integrate active inference and Bayesian mechanics, such as Bayesian active learning for deep learning and Bayesian active exploration for robotics.
Dr. Sanjeev Namjosh: The Hidden Math Behind All Living Systems - On Active Inference, the Free Energy Principle, and Bayesian Mechanics (Machine Learning Street Talk, October 2024)
youtube
Saturday, October 26, 2024
#artificial intelligence#active learning#bayesian mechanics#machine learning#deep learning#robotics#computer vision#natural language processing#uncertainty quantification#decision making#probabilistic modeling#bayesian inference#active inference#ai research#intelligent systems#interview#ai assisted writing#machine art#Youtube
6 notes
·
View notes
Text
Top 10 Emerging Tech Trends to Watch in 2025
Technology is evolving at an unprecedented pace, shaping industries, economies, and daily life. As we approach 2025, several emerging technologies are set to redefine how we engage with the world. From artificial intelligence to quantum computing, here are the key emerging tech trends to watch in 2025.

Top 10 Emerging Tech Trends In 2025
1. Artificial Intelligence (AI) Evolution
AI remains a dominant force in technological advancement. By 2025, AI will become more sophisticated and more deeply integrated into business and personal applications. Key trends include:
Generative AI: Models like ChatGPT and DALL·E will advance further, generating more human-like text, images, and even video.
AI-Powered Automation: Companies will increasingly rely on AI-driven automation for customer support, content creation, and even software development.
Explainable AI (XAI): Transparency in AI decision-making will become a priority, making AI more trustworthy and understandable.
AI in Healthcare: From diagnosing diseases to robotic surgery, AI will revolutionize healthcare, reducing errors and improving patient outcomes.
2. Quantum Computing Breakthroughs
Quantum computing is transitioning from theoretical research to real-world applications. In 2025, we can expect:
More powerful quantum processors: Companies like Google and IBM, along with startups like IonQ, are making significant strides in quantum hardware.
Quantum AI: Combining quantum computing with AI could enhance machine learning models, making some of them dramatically faster.
Commercial Quantum Applications: Industries such as logistics, pharmaceuticals, and cryptography will begin leveraging quantum computing to solve complex problems that traditional computers cannot handle efficiently.
3. The Rise of Web3 and Decentralization
The evolution of the internet continues with Web3, emphasizing decentralization, blockchain, and user ownership. Key elements include:
Decentralized Finance (DeFi): More financial services will shift to decentralized platforms, removing intermediaries.
Non-Fungible Tokens (NFTs) Beyond Art: NFTs will find utility in real estate, gaming, and intellectual property.
Decentralized Autonomous Organizations (DAOs): These blockchain-powered organizations will reshape governance, making decision-making more transparent and democratic.
Metaverse Integration: Web3 will further integrate with the metaverse, enabling secure and decentralized digital environments.
4. Extended Reality (XR) and the Metaverse
Virtual Reality (VR), Augmented Reality (AR), and Mixed Reality (MR) will continue to improve, making the metaverse more immersive. Key trends include:
Lighter, More Affordable AR/VR Devices: Companies like Apple, Meta, and Microsoft are working on more accessible and comfortable wearable technology.
Enterprise Use Cases: Businesses will use AR/VR for remote work, education, and collaboration, reducing the need for physical office space.
Metaverse Economy Growth: Digital assets, virtual real estate, and immersive experiences will gain traction, driven by blockchain technology.
AI-Generated Virtual Worlds: AI will play a role in developing dynamic, interactive, and ever-evolving virtual landscapes.
5. Sustainable and Green Technology
With growing concerns over climate change, technology will play a vital role in sustainability. Key innovations include:
Carbon Capture and Storage (CCS): New techniques will emerge to capture and store carbon emissions efficiently.
Smart Grids and Renewable Energy Integration: AI-powered smart grids will optimize energy distribution and consumption.
Electric Vehicle (EV) Advancements: Improvements in battery technology will lead to longer-lasting, faster-charging EVs.
Biodegradable Electronics: The rise of eco-friendly electronic components will help reduce e-waste.
6. Biotechnology and Personalized Medicine
Healthcare is undergoing a transformation driven by biotech advances. By 2025, we expect:
Gene Editing and CRISPR Advances: Breakthroughs in gene editing will enable treatments for genetic disorders.
Personalized Medicine: AI and big data will tailor treatments to individual genetic profiles.
Lab-Grown Organs and Tissues: Scientists will make further progress in 3D-printed organs and tissue engineering.
Wearable Health Monitors: More advanced wearables will track health metrics in real time, providing early warnings of illness.
7. Edge Computing and 5G Expansion
The growing demand for real-time data processing will push edge computing to the forefront. In 2025, we will see:
Faster 5G Networks: Global 5G coverage will expand, enabling high-speed, low-latency communication.
Edge AI Processing: AI algorithms will process data closer to its source, reducing the need for centralized cloud computing.
Industrial IoT (IIoT) Growth: Factories, supply chains, and logistics will benefit from real-time data analytics and automation.
8. Cybersecurity and Privacy Enhancements
With the rise of AI, quantum computing, and Web3, cybersecurity will become even more critical. Expect:
AI-Driven Cybersecurity: AI will detect and prevent cyber threats more effectively than traditional methods.
Zero Trust Security Models: Organizations will adopt stricter access controls, assuming no entity is inherently trustworthy.
Quantum-Resistant Cryptography: As quantum computers become more powerful, encryption methods will evolve to counter the potential threat.
Biometric Authentication: More systems will rely on facial recognition, retina scans, and behavioral biometrics.
9. Robotics and Automation
Automation will continue to disrupt numerous industries. By 2025, key trends include:
Humanoid Robots: Companies like Tesla and Boston Dynamics are developing robots for commercial and household use.
AI-Powered Supply Chains: Robotics will streamline logistics and warehouse operations.
Autonomous Vehicles: Self-driving cars, trucks, and drones will become more common in transportation and delivery services.
10. Space Exploration and Commercialization
Space technology is advancing rapidly, with governments and private companies pushing the boundaries. Trends in 2025 include:
Lunar and Mars Missions: NASA, SpaceX, and other organizations will advance their missions to establish lunar bases.
Space Tourism: Companies like Blue Origin and Virgin Galactic will make commercial space travel more accessible.
Asteroid Mining: Early-stage research and experiments in asteroid mining will begin, aiming to extract rare materials from space.
2 notes
·
View notes
Text


High-Technology Shipping refers to the use of advanced technology in the shipping and logistics industry to enhance efficiency, security, and sustainability in cargo transportation. It incorporates innovations such as automation, artificial intelligence (AI), the Internet of Things (IoT), blockchain, and eco-friendly energy sources.
In high-tech shipping, cargo vessels can be equipped with IoT sensors for real-time monitoring, AI-driven route optimization, and blockchain for supply chain transparency. Additionally, technologies like autonomous ships, drone deliveries, and robotic systems at ports are increasingly used to streamline and accelerate global shipping processes.
The advantages of high-tech shipping include reduced operational costs, faster delivery times, improved security, and a lower environmental impact through the use of alternative fuels and smart navigation systems.
This concept is becoming increasingly vital in global trade, enabling faster and more reliable distribution of goods worldwide.
2 notes
·
View notes
Text


I think if people like this character and I see interest, I'll make a comic about them. His name is Bon-Bon; he works in a cafe where he loves to bake all kinds of baked goods (he also really loves eating baked goods). He is introverted, so he usually communicates through a robot named Rob, whom he considers his son. Rob is endowed with artificial intelligence and distributes food to the customers.
#artists on tumblr#skeleton#sans#undertale#undertale au#sans oc#original character#artwork#art#digital art#illustration
19 notes
·
View notes
Text



HE-RObot (2008) by Heathkit Educational Systems, Benton Harbor, MI. In December 2007, Heathkit announced it would distribute an educational version of White Box Robotics' PC-BOT to be known as the HE-RObot. “Whether you and your students want to write custom algorithms for artificial intelligence or just want an entertaining learning tool, Heathkit's HE-RObot™ can do the job. Built with a compact design for optimal mobility, HE-RObot™ utilizes industry standard PC hardware and software providing the most versatile and user-friendly robot on the market today. Controlled by remote or set to move autonomously, it is capable of sensing its environment, reacting and moving around obstacles. HE-RObot™ can talk, see, hear, transmit, receive, and process data, or if you like, just play a CD. By being able to do just about anything, HE-RObot™ provides students and teachers with a friendly, personable technological experience.”
41 notes
·
View notes
Text
The tenured engineers of 2024
New Post has been published on https://thedigitalinsider.com/the-tenured-engineers-of-2024/


In 2024, MIT granted tenure to 11 faculty members across the School of Engineering. This year’s tenured engineers hold appointments in the departments of Aeronautics and Astronautics, Chemical Engineering, Civil and Environmental Engineering, Electrical Engineering and Computer Science (EECS, which reports jointly to the School of Engineering and MIT Schwarzman College of Computing), Mechanical Engineering, and Nuclear Science and Engineering.
“My heartfelt congratulations to the 11 engineering faculty members on receiving tenure. These faculty have already made a lasting impact in the School of Engineering through both advances in their field and their dedication as educators and mentors,” says Anantha Chandrakasan, chief innovation and strategy officer, dean of engineering, and the Vannevar Bush Professor of Electrical Engineering and Computer Science.
This year’s newly tenured engineering faculty include:
Adam Belay, associate professor of computer science and principal investigator at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), works on operating systems, runtime systems, and distributed systems. He is particularly interested in developing practical methods for microsecond-scale computing and cloud resource management, with many applications relating to performance and computing efficiency within large data centers.
Irmgard Bischofberger, Class of 1942 Career Development Professor and associate professor of mechanical engineering, is an expert in the mechanisms of pattern formation and instabilities in complex fluids. Her research reveals new insights into classical understanding of instabilities and has wide relevance to physical systems and industrial processes. Further, she is dedicated to science communication and generates exquisite visualizations of complex fluidic phenomena from her research.
Matteo Bucci serves as the Esther and Harold E. Edgerton Associate Professor of nuclear science and engineering. His research group studies two-phase heat transfer mechanisms in nuclear reactors and space systems, develops high-resolution, nonintrusive diagnostics and surface engineering techniques to enhance two-phase heat transfer, and creates machine-learning tools to accelerate data analysis and conduct autonomous heat transfer experiments.
Luca Carlone, the Boeing Career Development Professor in Aeronautics and Astronautics, is head of the Sensing, Perception, Autonomy, and Robot Kinetics Laboratory and principal investigator at the Laboratory for Information and Decision Systems. His research focuses on the cutting edge of robotics and autonomous systems research, with a particular interest in designing certifiable perception algorithms for high-integrity autonomous systems and developing algorithms and systems for real-time 3D scene understanding on mobile robotics platforms operating in the real world.
Manya Ghobadi, associate professor of computer science and principal investigator at CSAIL, builds efficient network infrastructures that optimize resource use, energy consumption, and availability of large-scale systems. She is a leading expert in networks with reconfigurable physical layers, and many of the ideas she has helped develop are part of real-world systems.
Zachary (Zach) Hartwig serves as the Robert N. Noyce Career Development Professor in the Department of Nuclear Science and Engineering, with a co-appointment at MIT’s Plasma Science and Fusion Center. His current research focuses on the development of high-field superconducting magnet technologies for fusion energy and accelerated irradiation methods for fusion materials using ion beams. He is a co-founder of Commonwealth Fusion Systems, a private company commercializing fusion energy.
Admir Masic, associate professor of civil and environmental engineering, focuses on bridging the gap between ancient wisdom and modern material technologies. He applies his expertise in the fields of in situ and operando spectroscopic techniques to develop sustainable materials for construction, energy, and the environment.
Stefanie Mueller is the TIBCO Career Development Professor in the Department of EECS. Mueller has a joint appointment in the Department of Mechanical Engineering and is a principal investigator at CSAIL. She develops novel hardware and software systems that give objects new capabilities. Among other applications, her lab creates health sensing devices and electronic sensing devices for curved surfaces; embedded sensors; fabrication techniques that enable objects to be trackable via invisible marker; and objects with reprogrammable and interactive appearances.
Koroush Shirvan serves as the Atlantic Richfield Career Development Professor in Energy Studies in the Department of Nuclear Science and Engineering. He specializes in the development and assessment of advanced nuclear reactor technology. He is currently focused on accelerating innovations in nuclear fuels, reactor design, and small modular reactors to improve the sustainability of current and next-generation power plants. His approach combines multiple scales, physics and disciplines to realize innovative solutions in the highly regulated nuclear energy sector.
Julian Shun, associate professor of computer science and principal investigator at CSAIL, focuses on the theory and practice of parallel and high-performance computing. He is interested in designing algorithms that are efficient in both theory and practice, as well as high-level frameworks that make it easier for programmers to write efficient parallel code. His research has focused on designing solutions for graphs, spatial data, and dynamic problems.
Zachary P. Smith, Robert N. Noyce Career Development Professor and associate professor of chemical engineering, focuses on the molecular-level design, synthesis, and characterization of polymers and inorganic materials for applications in membrane-based separations, a promising aid for the energy industry and the environment, from separating the olefins used to make plastics and rubber to capturing smokestack carbon dioxide emissions. He is a co-founder and chief scientist of Osmoses, a startup aiming to commercialize membrane technology for industrial gas separations.
#2024#3d#Aeronautical and astronautical engineering#aeronautics#Algorithms#Analysis#applications#approach#artificial#Artificial Intelligence#assessment#autonomous systems#Awards#honors and fellowships#Boeing#carbon#Carbon dioxide#carbon dioxide emissions#career#career development#chemical#Chemical engineering#Civil and environmental engineering#classical#Cloud#code#college#communication#computer#Computer Science
2 notes
·
View notes
Text
The Art And Science Of Hair Transplants

Hair loss can deeply impact self-esteem and quality of life, prompting many to seek solutions like hair transplants. Beyond merely restoring hair, modern hair transplantation blends meticulous surgical techniques with artistic principles to achieve natural-looking results. This exploration delves into the fusion of art and science in hair transplants, examining how advancements in technology and surgical expertise have transformed the field. For more information, visit Hair Transplant Brighton.
The Science Behind Hair Transplants:
Hair transplantation is rooted in the understanding of hair growth and follicular dynamics. The two primary techniques, Follicular Unit Transplantation (FUT) and Follicular Unit Extraction (FUE), differ in how donor hair follicles are harvested and transplanted. FUT involves removing a strip of scalp from the donor area, dissecting it into individual follicular units under a microscope, and then transplanting them to the recipient site. FUE, on the other hand, uses a punch tool to extract individual follicular units directly from the scalp, which are then transplanted one by one. Both techniques aim to relocate hair follicles from areas of dense hair growth (donor sites) to areas experiencing hair loss or thinning (recipient sites), ensuring natural-looking results.
Artistry in Hairline Design:
Creating a natural hairline is where the artistry of hair transplantation truly shines. Surgeons meticulously design the hairline to complement facial features, ethnicity, and age, taking into account natural hair growth patterns and density. The angle, direction, and distribution of transplanted follicles are carefully planned to mimic the natural flow of hair, ensuring seamless integration with existing hair and a balanced aesthetic appearance. This artistic approach requires both technical skill and an understanding of the patient's aesthetic goals, resulting in a hairline that enhances facial symmetry and restores confidence.
Technological Advancements:
Advancements in technology have revolutionized the precision and efficiency of hair transplant procedures. Robotic systems like ARTAS utilize artificial intelligence to assist surgeons in harvesting and transplanting follicular units with unparalleled accuracy. These systems enhance the speed and accuracy of follicle extraction, reduce trauma to donor areas, and optimize graft survival rates. Additionally, advanced imaging techniques such as high-definition cameras and digital mapping tools enable surgeons to visualize the scalp in greater detail, facilitating precise graft placement and enhancing overall procedural outcomes.
Patient-Centered Care:
Beyond technical proficiency, patient-centered care is integral to successful hair transplants. Surgeons conduct thorough consultations to understand each patient's unique concerns, preferences, and expectations. They educate patients about available treatment options, discuss realistic outcomes, and tailor treatment plans to achieve individualized aesthetic goals. Throughout the process, from pre-operative preparation to post-operative care, patient comfort, safety, and satisfaction remain paramount. This personalized approach fosters trust and collaboration between patients and their medical team, ensuring a positive experience and optimal results.
Post-Operative Recovery and Follow-Up:
Post-operative care plays a crucial role in the success of hair transplants. Patients are provided with detailed instructions on caring for the transplant site, managing discomfort, and promoting healing. While recovery times vary, most individuals can resume normal activities within a few days to weeks following surgery. Regular follow-up appointments allow surgeons to monitor graft growth, assess healing progress, and address any concerns or questions that may arise. Continued support and guidance throughout the recovery process contribute to long-term satisfaction and confidence in the results of hair transplantation.
Ethical Considerations:
Ethical considerations in hair transplantation encompass informed consent, patient autonomy, and transparency regarding potential risks and benefits. Surgeons adhere to professional guidelines and ethical standards to ensure that patients make informed decisions based on accurate information. They prioritize patient well-being and strive to achieve natural-looking results while managing expectations realistically. Ethical practice also involves ongoing education and training to stay abreast of advancements in the field, uphold professional integrity, and provide compassionate care to individuals seeking hair restoration.
Conclusion:
The art and science of hair transplants epitomize the intersection of technical expertise and artistic vision, transforming lives by restoring natural hair growth and enhancing self-confidence. Through meticulous surgical techniques, advancements in technology, and patient-centered care, hair transplantation offers effective solutions for individuals experiencing hair loss. Surgeons combine scientific knowledge with artistic skill to design natural-looking hairlines that harmonize with facial features and reflect each patient's unique identity. As the field continues to evolve with research, innovation, and ethical practice, the promise of hair transplants remains steadfast in providing enduring solutions and renewed confidence to those seeking to reclaim their crowning glory.
Contact Us:
3rd Floor, Queensberry House,
106 Queens Rd, Brighton and Hove,
Brighton BN1 3XF
Phone: 020 8088 2393
Google map: https://maps.app.goo.gl/GvEhwpk5GYbPKaYj8
2 notes
·
View notes
Text
Robot Dreams (2023, Spain/France)
There exists an assumption that one has to be an animator in order to direct an animated film. While most cinephiles might reflexively point to Wes Anderson (2009’s Fantastic Mr. Fox, 2018’s Isle of Dogs), I think Isao Takahata (1988’s Grave of the Fireflies, 1991’s Only Yesterday) is the exemplar here. Even so, a non-animator taking the reins of an animated movie is rare. Into that fold steps Pablo Berger, in this adaptation of Sara Varon’s graphic novel Robot Dreams. Moved after reading Varon’s work in 2010, Berger acquired Varon’s “carte blanche” permission to make a 2D animated adaptation however he saw fit. Like the graphic novel, Berger’s Robot Dreams is also dialogue-free.
Beginning production on Robot Dreams proved difficult. Berger originally teamed with Ireland’s Cartoon Saloon (2009’s The Secret of Kells, 2020’s Wolfwalkers) to make Robot Dreams, but these plans fell by the wayside when the COVID-19 pandemic hit. His schooling in how to make an animated film would come quickly. Despite an increased appetite for Spanish animation worldwide (2019’s Klaus, 2022’s Unicorn Wars), poor distribution and marketing of domestically-made animated movies has often meant Spanish animators have roved around Europe looking for work. With a pandemic sending those Spanish animators home, Berger and his Spanish and French producers set up “pop-up studios” in Madrid and Pamplona, purchased the infrastructure and space needed to make an animated feature, and recruited and hired animators. Berger’s admiration of animated film fuses the lessons of silent film acting (Berger made a gorgeous silent film in 2012’s Blancanieves; in interviews, Berger cites Charlie Chaplin’s movies as having the largest influence on Robot Dreams, alongside Takahata’s films) to result in one of the most emotionally honest films of the decade thus far – animated or otherwise.
Somewhere in Manhattan in the late 1980s in a world populated entirely of anthropomorphized animals, we find ourselves in Dog’s apartment. Dog, alone in this world, consuming yet another TV dinner, is channel surfing late one evening. He stumbles upon a commercial advertising a robot companion. Intrigued, he orders the robot companion and, with some difficulty, assembles Robot. The two become fast friends as they romp about New York City over a balmy summer, complete with walks around their neighborhood and Central Park, street food, trips to Coney Island, and roller blading along to the groovy tunes of Earth, Wind & Fire. At summer’s end, an accident sees the involuntary separation of Dog and Robot, endangering, for all that the viewer can assume, the most meaningful friendship in Dog’s life and Robot’s brief time of existence.
youtube
If you have not seen the film yet, let me address a popular perception early on in this piece. Set in a mostly-analog 1980s, Robot Dreams contains none of the agonizing over artificial intelligence or automatons in fashion in modern cinema. There is no commentary about how technology frays an individual’s connections to others. Robot is a rudimentary creation, closer to a sentient grade school science project than a Data or T-1000.
So what is Robot Dreams saying instead? Principally, it is about the loving bonds of friendship – how a friend can provide comfort and company, how they uplift the best parts of your very being. For Robot, the entirety of their life prior to the aforementioned accident (something that I, for non-viewers, am trying not to spoil as Robot Dreams’ emotional power is fully experienced if you know as little as possible) has been one of complete estival bliss. Robot, in due time, discovers that one of the most meaningful aspects of friendship is that such relationships will eventually conclude – a fundamental part of life. And for Dog, Robot’s entrance into his life allows him to realize that, yes, he can summon the courage to connect with his fellow animals, realizing his self-worth. Perhaps Dog gives up addressing the accident a little too easily, but the separation of friends has a way of complicating emotions and provoking peculiar reactions.
On occasion, Robot Dreams’ spirit reminds me of Charlie Chaplin’s silent feature film period (1921-1936) – in which Chaplin, at the height of his filmmaking prowess, most successfully wove together slapstick comedy and pathos. On paper, pathos and slapstick should not mix, but Chaplin was the master of combining the two. No wonder Berger fully acknowledges the influence of his favorite Chaplin work, City Lights (1931), here.
Across Robot Dreams, Berger inserts an absurd visual humor that works both because almost all of the characters are animals and despite the fact almost everyone is an animal. A busking octopus in the New York City subway? Check. The image of pigs playing on the beach while sunburnt to a blazing red? You bet. A dancing dream sequence where one of our lead characters finds himself in The Wizard of Oz performing Busby Berkeley-esque choreography on the Yellow Brick Road? Why not? Much of Chaplin’s silent film humor didn’t come from his Little Tramp character, but the silliness, ego, and/or absentmindedness of all those surrounding the Tramp. In City Lights, humor also came from the rough-and-tumble edges of urban America. Such is the case, too, in Robot Dreams, with its blemished, trash-strewn depiction of late ‘80s New York (credit must also go to the sound mix, as they perfectly capture how ambiently noisy a big city can be).
Amid all that comedy, Berger nails the balance between the pathos and the hilarity – pushing too far in either direction would easily undermine the other. The film’s melancholy shows up in ostensibly happy moments and places of recreation: a realization during a rooftop barbeque lunch, the emptiness of a shuttered Coney Island beach in the winter, and an afternoon of kiting in Central Park. It captures how our thoughts of erstwhile or involuntarily separated friends come to us innocuously, in places that stir memories that we, in our present company, might not speak of aloud.
As the film’s third character, New York City (where Berger lived for a decade) is a global cultural capital, a citywide theater of dreams, a skyscraper-filled signature to the American Dream. To paraphrase Sinatra, if you can make it there, you can make it anywhere. But it tends to grind those dreams into dust. The city’s bureaucratic quagmire is lampooned here, as is its reputation for mean-spirited or jaded locals. Robot Dreams also depicts the visual and socioeconomic differences between the city’s boroughs. With such a jumble of folks of different life stations mashed together, Dog’s people-watching, er, animal-watching during his loneliest moments makes him feel the full intensity of his social isolation. With Robot, however, Dog has a naïve companion that he can show the best of the city to. Robot has no understanding of passive-aggressive or outright hostile behavior (see: Robot hilariously not understanding what a middle finger salute is – the only objectionable scene if you are considering showing this to younger viewers). Within this city of contradictions, Dog and Robot’s love is here to stay.
Though Berger is no animator, his experience guiding Spanish actresses Ángela Molina, Maribel Verdú, and Macarena García in Blancanieves through a silent film was valuable. In animated film, there is a tendency towards overexaggerating emotions. But with Robot Dreams’ close adaptation of the graphic novel’s ligne claire style and the nature of Robot’s face, the typical level of exaggeration in animation could not fly in Robot Dreams. Berger and storyboard artist Maca Gil (2022’s My Father’s Dragon, the 2023 Peanuts special One-of-a-Kind Marcie) made few alterations to the storyboards, fully knowing how they wished to frame the film, and hoping to convey the film’s emotions with the facial subtlety seen in the graphic novel. Character designer Daniel Fernandez Casas (Klaus, 2024’s IF) accomplishes this with a minimum of lines to outline characters’ bodies and faces. Meanwhile, art director José Luis Ágreda (2018’s Buñuel in the Labyrinth of the Turtles) and animation director Benoît Féroumont (primarily a graphic novelist) visually translated Sara Varon’s graphic novel using flat colors and a lack of shading to convey background and character depth (one still needs shading, of course, to convey lights and darks of an interior or exterior).
Robot Dreams’ nomination for the Academy Award for Best Animated Feature this year was one of the most pleasant surprises of the 96th Academy Awards. In North America, Robot Dreams’ distributor, Neon, has pursued an inexplicable distribution and marketing strategy of not allowing the film a true theatrical release until months after the end of the last Oscars. The film was available for a one-night special screening in select theaters in and near major North American cities the Wednesday before the Academy Awards. And only now (as of the weekend of May 31, 2024), Neon will release Robot Dreams this weekend in two New York City theaters, the following weekend in and around Los Angeles, with few other locations confirmed – well after interest in watching the film theatrically peaked in North America.
Alongside Neon’s near-nonexistent distribution and marketing of Jonas Poher Rasmussen's animated documentary Flee (2021, Denmark), one has to question Neon’s commitment to animated features and whether the company has a genuine interest in showing their animated acquisitions to people outside major North American cities. This is distributional malpractice and maddeningly disrespectful from one of the most acclaimed independent distributors of the last decade.
In Robot Dreams, Pablo Berger and his crew made perhaps the best animated feature of the previous calendar year. Robot Dreams might not have the artistic sumptuousness of the best anime films today, nor the digital polish one expects from the work of a major American animation studio. By film’s end, its simple, accessible style cannot hide its irrepressible emotional power. Its conclusion speaks to all of us who silently wonder about close friends long left to the past, their absence filled only by memory.
My rating: 8.5/10
^ Based on my personal imdb rating. My interpretation of that ratings system can be found in the “Ratings system” page on my blog. Half-points are always rounded down.
For more of my reviews tagged “My Movie Odyssey”, check out the tag of the same name on my blog.
#Robot Dreams#Pablo Berger#Sara Varon#Fernando Franco#Daniel Fernandez Casas#Benoît Feroumont#José Luis Ágreda#Maca Gil#Ibon Cormenzana#Ignasi Estapé#Sandra Tapia Diaz#Best Animated Feature#Oscars#96th Academy Awards#My Movie Odyssey
2 notes
·
View notes
Text
The Rise of AI Technology in the Music Industry: Innovation or Threat?
What is it?
The growth of artificial intelligence (AI) technology in the music industry has sparked both interest and concern. This piece aims to shed light on the intricate relationship between AI technology and the essence of musical creation by taking into account the opinions of musicians, industry professionals, and technology experts. Ultimately, it asks whether AI represents an adverse impact or a harmonious evolution in the music industry. AI has played a significant role in the advancement of the music industry in the last few years and is becoming more widely used at every step of recording, distributing, and listening to music. This growing integration has raised a controversy: is AI pushing the music industry forward, or is it a threat to the traditional way of creating music?
Artificial Intelligence X Music Collab?
Traditionally, music production required expensive equipment and skilled professionals to shape the creative process. AI-powered tools now enable professional or amateur music producers to create professional-sounding tracks with minimal resources. AI can aid in mixing and mastering, and can even generate realistic vocal recordings. This democratization of music production creates a diverse and empowering space for artists who may not have had access to traditional production resources.
In contrast, the use of AI in music production raises concerns about the possible erosion of an artist's authenticity. Is it possible for a machine to accurately convey human feeling through music? Critics argue that over-reliance on AI could lead to algorithms prioritizing commercially successful formulas over the unfiltered, genuine passion that artists and musicians bring to the table.
While AI can undoubtedly handle the whole process of creating music, it lacks a human touch. The raw emotion musicians embed in every piece of art they make is irreplaceable. The challenge for the music industry is to find a balance between using AI as a tool for innovation and preserving the authenticity of the art form.
Conclusion
We agree with using AI technology, since it has many advantages, from efficiency to quality enhancements, and it makes our working lives easier.
AI technology provides the music industry with a number of advantages: it can have a significant impact on our careers, open up countless opportunities, and improve creativity and accessibility as it transforms the process of making music. However, this does not imply that we should rely on AI technology entirely. It has potential drawbacks, because its output comes from a machine rather than from human passion: it has no human touch, and it lacks emotion. Dependence on the technology can also lead to a decline in traditional musical skills, as trusting automated work erodes a human producer's musical intuition. The more we learn about the pros and cons of AI technology from the music industry's point of view, the more it looks like a threat, because human passion won't be present, and music is known for connecting people to one another emotionally.
2 notes
·
View notes
Text
AI’s Second Chance: How Geometric Deep Learning Can Help Heal Silicon Valley’s Moral Wounds
The concept of AI dates back to the early 20th century, when scientists and philosophers began to explore the possibility of creating machines that could think and learn like humans. In 1929, Makoto Nishimura, a Japanese professor and biologist, created Japan's first robot, Gakutensoku, which symbolized the idea of "learning from the laws of nature." This marked the beginning of a new era in AI research. In the late 1930s and early 1940s, John Vincent Atanasoff and Clifford Berry developed the Atanasoff-Berry Computer (ABC), a 700-pound machine that could solve 29 simultaneous linear equations. This achievement laid the foundation for future advancements in computational technology.
In the 1940s, Warren S. McCulloch and Walter H. Pitts Jr introduced the Threshold Logic Unit, a mathematical model for an artificial neuron. This innovation marked the beginning of artificial neural networks, which would go on to play a crucial role in the development of modern AI. The Threshold Logic Unit could mimic a biological neuron by receiving external inputs, processing them, and providing an output, as a function of input. This concept laid the foundation for the development of more complex neural networks, which would eventually become a cornerstone of modern AI.
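To see how simple that first artificial neuron was, here is a minimal sketch of a McCulloch-Pitts-style threshold unit in Python; the choice of unit weights and a threshold of 2 to realize logical AND is an illustrative assumption, not something specified in the post.

```python
# A minimal McCulloch-Pitts-style threshold logic unit (illustrative sketch).
def threshold_unit(inputs, weights, threshold):
    """Fire (output 1) iff the weighted sum of the inputs reaches the threshold."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

# Truth table: two binary inputs with unit weights and threshold 2 behave like AND.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", threshold_unit((a, b), (1, 1), 2))
```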
Alan Turing, a British mathematician and computer scientist, made significant contributions to the development of AI. His work on the Bombe machine, which helped decipher the Enigma code during World War II, laid the foundation for machine learning theory. Turing's 1950 paper, "Computing Machinery and Intelligence," proposed the Turing Test, a challenge to determine whether a machine could think. This test, although questioned in modern times, remains a benchmark for evaluating cognitive AI systems. Turing's ideas about machines that could reason, learn, and adapt have had a lasting impact on the field of AI.
The 1950s and 1960s saw a surge in AI research, driven by the development of new technologies and the emergence of new ideas. This period, known as the "AI summer," was marked by rapid progress and innovation. The creation of the first commercial computers, the development of new programming languages, and the emergence of new research institutions all contributed to the growth of the field. The AI summer saw the development of the first AI programs, including the Logic Theorist, which was designed to simulate human reasoning, and the General Problem Solver, which was designed to solve complex problems.
The term "Artificial Intelligence" was coined by John McCarthy in 1956, during the Dartmouth Conference, a gathering of computer scientists and mathematicians. McCarthy's vision was to create machines that could simulate human intelligence, and he proposed that mathematical functions could be used to replicate human intelligence within a computer. This idea marked a significant shift in the field, as it emphasized the potential of machines to learn and adapt. McCarthy's work on the programming language LISP and his concept of "Timesharing" and distributed computing laid the groundwork for the development of the Internet and cloud computing.
By the 1970s and 1980s, the AI field began to experience a decline, known as the "AI winter." This period was marked by a lack of funding, a lack of progress, and a growing skepticism about the potential of AI. The limitations of early programs such as ELIZA, which was designed to simulate human conversation, and the lack of progress in developing practical AI applications contributed to the decline of the field. The AI winter persisted, with intermittent thaws, into the 1990s, during which time AI research was largely relegated to the fringes of the computer science community.
The AI Winter was caused by a combination of factors, including overhyping and unrealistic expectations, lack of progress, and lack of funding. In the 1960s and 1970s, AI researchers had predicted that AI would revolutionize the way we live and work, but these predictions were not met. As one prominent AI researcher, John McCarthy, noted, "The AI community has been guilty of overpromising and underdelivering". The lack of progress in AI research led to a decline in funding, as policymakers and investors became increasingly skeptical about the potential of AI.
One of the primary technical challenges that led to the decline of rule-based systems was the difficulty of hand-coding rules. As AI researcher Marvin Minsky noted, "The problem with rule-based systems is that they require a huge amount of hand-coding, which is time-consuming and error-prone". This led to a decline in the use of rule-based systems, as researchers turned to other approaches, such as machine learning and neural networks.
The personal computer revolutionized the way people interacted with technology, and it had a significant impact on the development of AI. The personal computer made it possible for individuals to develop their own software without the need for expensive mainframe computers, and it enabled the development of new AI applications.
The first personal computer, the Apple I, was released in 1976, and it was followed by the Apple II in 1977. The IBM PC was released in 1981, and it became the industry standard for personal computers.
The AI Winter had a significant impact on the development of AI, and it led to a decline in interest in AI research. However, it also led to a renewed focus on the fundamentals of AI, and it paved the way for the development of new approaches to AI, such as machine learning and deep learning. These approaches were developed in the 1980s and 1990s, and they have since become the foundation of modern AI.
As AI research began to revive in the late 1990s and early 2000s, Silicon Valley's tech industry experienced a moral decline. The rise of the "bro culture" and the prioritization of profits over people led to a series of scandals, including:
- The dot-com bubble and subsequent layoffs.
- The exploitation of workers, particularly in the tech industry.
- The rise of surveillance capitalism, where companies like Google and Facebook collected vast amounts of personal data without users' knowledge or consent.
This moral decline was also reflected in the increasing influence of venture capital and the prioritization of short-term gains over long-term sustainability.
Geometric deep learning is a key area of research in modern AI, and its development is a direct result of the revival of AI research in the late 1990s and early 2000s. It has the potential to address some of the moral concerns associated with the tech industry. Geometric deep learning methods can provide more transparent and interpretable results, which can help to mitigate the risks associated with AI decision-making. It can be used to develop more fair and unbiased AI systems, which can help to address issues of bias and discrimination in AI applications. And it can be used to develop more sustainable AI systems, which can help to reduce the environmental impact of AI research and deployment.
Geometric deep learning is a subfield of deep learning that focuses on the study of geometric structures and their representation in data. This field has gained significant attention in recent years, particularly in applications such as object detection, segmentation, tracking, robot perception, motion planning, control, social network analysis and recommender systems.
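As a concrete, hedged illustration of what a geometric deep learning building block looks like, here is a single graph message-passing layer in plain numpy; the toy graph, feature sizes, and random weights are assumptions made for demonstration, not a specific published architecture.

```python
# One mean-aggregation message-passing layer on a toy graph (illustrative sketch).
import numpy as np

rng = np.random.default_rng(0)

# A 4-node undirected graph given by its adjacency matrix, with self-loops added.
A = np.array([[1, 1, 0, 0],
              [1, 1, 1, 0],
              [0, 1, 1, 1],
              [0, 0, 1, 1]], dtype=float)
X = rng.normal(size=(4, 3))             # 3 input features per node
W = rng.normal(size=(3, 8))             # learnable projection to 8 hidden features

deg = A.sum(axis=1, keepdims=True)      # node degrees (counting the self-loop)
H = np.maximum(((A @ X) / deg) @ W, 0)  # average neighbor features, project, ReLU

print(H.shape)                          # (4, 8): a new representation per node
```

Because the same aggregation rule is applied at every node, relabeling the nodes simply permutes the output rows in the same way, and that respect for the graph's symmetry is the principle geometric deep learning builds on.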
While Geometric Deep Learning is not a direct solution to the moral decline of Silicon Valley, it has the potential to address some of the underlying issues and promote more responsible and sustainable AI research and development.
As AI becomes increasingly integrated into our lives, it is essential that we prioritize transparency, accountability, and regulation to ensure that AI is used in a way that is consistent with societal values.
Transparency is essential for building trust in AI, and it involves making AI systems more understandable and explainable. Accountability is essential for ensuring that AI is used responsibly, and it involves holding developers and users accountable for the impact of AI. Regulation is essential for ensuring that AI is used in a way that is consistent with societal values, and it involves developing and enforcing laws and regulations that govern the development and use of AI.
Policymakers and investors have a critical role to play in shaping the future of AI. They can help to ensure that AI is developed and used in a way that is consistent with societal values by providing funding for AI research, creating regulatory frameworks, and promoting transparency and accountability.
The future of AI is uncertain, but it is clear that AI will continue to play an increasingly important role in society. As AI continues to evolve, it is essential that we prioritize transparency, accountability, and regulation to ensure that AI is used in a way that is consistent with societal values.
Prof. Gary Marcus: The AI Bubble - Will It Burst, and What Comes After? (Machine Learning Street Talk, August 2024)
youtube
Prof. Gary Marcus: Taming Silicon Valley (Machine Learning Street Talk, September 2024)
youtube
LLMs Cannot Reason (TheAIGRID, October 2024)
youtube
Geometric Deep Learning Blueprint (Machine Learning Street Talk, September 2021)
youtube
Max Tegmark’s Insights on AI and The Brain (TheAIGRID, November 2024)
youtube
Michael Bronstein: Geometric Deep Learning - The Erlangen Programme of ML (Imperial College London, January 2021)
youtube
This is why Deep Learning is really weird (Machine Learning Street Talk, December 2023)
youtube
Michael Bronstein: Geometric Deep Learning (MLSS Kraków, December 2023)
youtube
Saturday, November 2, 2024
#artificial intelligence#machine learning#deep learning#geometric deep learning#tech industry#transparency#accountability#regulation#ethics#ai history#ai development#talk#conversation#presentation#ai assisted writing#machine art#Youtube
2 notes
·
View notes
Text
Earlier this month, OpenAI’s board abruptly fired its popular CEO, Sam Altman. The ouster shocked the tech world and rankled Altman’s loyal employees, the vast majority of whom threatened to quit unless their boss was reinstated. After a chaotic five-day exile, Altman got his old job back—with a reconfigured, all-male board overseeing him, led by ex-Salesforce CEO and former Twitter board chair Bret Taylor.
Right now, only three people sit on this provisional OpenAI board. (More are expected to join.) Immediately prior to the failed coup, there were six. Altman and OpenAI cofounders Greg Brockman and Ilya Sutskever sat alongside Quora CEO Adam D’Angelo; AI safety researcher Helen Toner; and Tasha McCauley, a robotics engineer who leads a 3D-mapping startup.
The specifics of the boardroom overthrow attempt remain a mystery. Of those six, D’Angelo is the only one left standing. In addition to Taylor, the other new board member is former US Treasury secretary Larry Summers, a living emblem of American capitalism who notoriously said in 2005 that innate differences in the sexes may explain why fewer women succeed in STEM careers (he later apologized).
While Altman, Brockman, and Sutskever all still work at OpenAI despite their absence from the board, Toner and McCauley—the two women who sat on the board—are now cut off from the company. As the artificial intelligence startup moves forward, the stark gender imbalance of its revamped board illustrates the precarious position of women in AI.
“What this underscores is that there aren’t enough women in the mix to begin with,” says Margaret O’Mara, a University of Washington history professor and author of The Code: Silicon Valley and the Remaking of America. For O’Mara, the new board reflects Silicon Valley’s power structure, signaling that it’s “back to business” for the world’s most influential AI company—if back to business means a return to the Big Tech boys’ club. (Worth noting that when it was founded in 2015, OpenAI only had two board members: Altman and Elon Musk.)
Prominent AI researcher Timnit Gebru, who was fired by Google in late 2020 over a dispute about a research paper involving critical analysis of large language models, has been floated in the media as a potential board candidate. She is, indeed, a leader in responsible AI; post-Google, she founded the Distributed AI Research Institute, which describes itself as a space where “AI is not inevitable, its harms are preventable.” If OpenAI wanted to signal that it is still committed to AI safety, Gebru would be a savvy choice. Also an impossible one: She does not want a seat on the board of directors.
“It’s repulsive to me,” says Gebru. “I honestly think there’s more of a chance that I would go back to Google—I mean, they won’t have me and I won’t have them—than me going to OpenAI.”
The lack of women in the AI field has been an issue for years; in 2018, WIRED estimated that only 12 percent of leading machine learning researchers were women. In 2020, the World Economic Forum found that only 26 percent of data and AI positions in the workforce are held by women. “AI is very imbalanced in terms of gender,” says Sasha Luccioni, an AI ethics researcher at HuggingFace. “It’s not a very welcoming field for women.”
One of the areas where women are flourishing within the AI industry is in the world of ethics and safety, which Luccioni views as comparatively inclusive. She also sees it as significant that the ousted board members reportedly clashed with Altman over OpenAI’s mission. According to The New York Times, Toner and Altman had bickered over a research paper she published with coauthors in October that Altman interpreted as critical of the company. Luccioni believes that in addition to highlighting gender disparities, this incident also demonstrates how voices advocating for ethical considerations are getting hushed.
“I don’t think they got fired because they’re women,” Luccioni says. “I think they got fired because they highlighted an issue.” (Technically, both women agreed to leave the board.)
No matter what actually spurred the conflict at OpenAI, the way in which it was resolved, with Altman back at the helm and his dissenters out, has played into a narrative: Altman emerging as victor, flanked by loyalists and boosters. His board is now stocked with men eager to commercialize OpenAI’s products, not rein in its technological ambition. (One recent headline capturing this perspective: “AI Belongs to the Capitalists Now.”) Caution espoused by female leadership at least appears to have lost.
O’Mara sees the all-male OpenAI board as a sign of a swinging cultural pendulum. Just as some Silicon Valley tech companies have been working to correct their woeful track records in diversity and consider their environmental footprints, others have recoiled against “wokism” in various forms, instead espousing hard-nosed beliefs about work culture.
“It’s this sentiment around, ‘OK, we’re done being touchy-feely,’” she says. “Whether it’s Elon Musk’s ‘extremely hardcore’ demands or Marc Andreessen’s recent manifesto, the idea is that if you’re calling for people to take a pause and consider potential harms or complaining about the lack of representation, that is orthogonal to their business.”
OpenAI is reportedly planning to expand the board soon, and speculation is rampant about who will join. Its conspicuously all-male and all-white makeup certainly did not go unnoticed, and OpenAI is already looking at prospects who might placate some critics. According to a Bloomberg report, philanthropist Laurene Powell Jobs, former Yahoo CEO Marissa Mayer, and former US Secretary of State Condoleezza Rice were all considered but not selected.
At the time of publication, OpenAI had not responded to repeated requests for comment.
For many onlookers, it’s crucial to choose someone who will advocate balancing ambition with safety and responsibility—someone whose line of inquiry might match that of Toner, for example, rather than someone who simply looks like her. “The sort of people that this board should be bringing back are people who are thinking about responsible or trustworthy technology, and safety,” says Kay Firth-Butterfield, executive director of the Centre for Trustworthy Technology. “There are a lot of women out there who are experts in that particular field.”
As OpenAI searches for new board members, it may meet resistance from prospects wary of the real power dynamics within the company. There are already concerns about tokenization. “I just feel like the person on the board would have a horrible time because they will constantly be fighting an uphill battle,” says Gebru. “Used as a token and not to really make any kind of difference.”
She’s not the only person within the world of AI ethics to question whether new board members would be marginalized. “I wouldn’t touch that board with a ten-foot pole,” Luccioni says. She feels she couldn’t recommend a friend take that sort of position, either. “Such stress!”
Meredith Whittaker, president of messaging app Signal, sees value in bringing someone to the board who isn’t just another startup founder, but she doubts that adding a single woman or person of color will set them up to effect meaningful change. Unless the expanded board is able to genuinely challenge Altman and his allies, packing it with people who tick off demographic boxes to satisfy calls for diversity could amount to little more than “diversity theater.”
“We’re not going to solve the issue—that AI is in the hands of concentrated capital at present—by simply hiring more diverse people to fulfill the incentives of concentrated capital,” Whittaker says. “I worry about a discourse that focuses on diversity and then sets folks up in rooms with [expletive] Larry Summers without much power.”
3 notes
·
View notes
Video
tumblr
Battlecode is MIT’s long-running programming competition. Every January, participants from around the world write code to program entire armies – not just individual bots – before they duke it out on screen. Throughout Independent Activities Period, participants learn to use artificial intelligence, pathfinding, distributed algorithms, and more to make the best possible team strategy. 👾
Learn more here → https://news.mit.edu/2023/robot-armie...
and here → Battlecode
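For a flavour of the kind of logic competitors write, here is a minimal breadth-first-search pathfinder on a grid, sketched in Python. This is not Battlecode’s actual API (entries are written against the competition’s own engine); the grid, function name, and coordinates are purely illustrative.

```python
# A generic BFS pathfinder on a grid: grid[r][c] == 0 is passable, 1 is blocked.
from collections import deque

def bfs_path(grid, start, goal):
    """Return a list of (row, col) steps from start to goal, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])
    frontier = deque([start])
    came_from = {start: None}
    while frontier:
        current = frontier.popleft()
        if current == goal:
            # Reconstruct the path by walking predecessors backwards.
            path = []
            while current is not None:
                path.append(current)
                current = came_from[current]
            return path[::-1]
        r, c = current
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (r + dr, c + dc)
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and grid[nxt[0]][nxt[1]] == 0 and nxt not in came_from):
                came_from[nxt] = current
                frontier.append(nxt)
    return None

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(bfs_path(grid, (0, 0), (2, 0)))
# [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0)]
```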
12 notes
·
View notes
Text
What is AIS token?
AIS token for investment
AIS token is one of the artificial intelligence tokens that has entered the arena to advance artificial intelligence and robotics and to link digital currency as closely as possible to AI.
The purpose of AIS token is to invest in promising artificial intelligence projects that have room to grow. The AIS token team selects these projects with input from experts and invests in them. AIS token is also used as a trading currency, and its market capitalization grows day by day.

Therefore, many people active in the digital currency space choose AIS token as a way to invest in artificial intelligence.
What are the criteria for choosing an artificial intelligence token for investment?
When it comes to choosing an artificial intelligence (AI) token for investment, there are several criteria you can consider. Here are a few important factors:
1. Technology and Team: Evaluate the underlying technology of the AI token project and assess its potential for innovation and practicality. Look for a strong team with expertise in both AI and blockchain, as this combination is crucial for success.
2. Use Case and Market Potential: Consider the specific use case that the AI token aims to address. Assess the market potential and demand for the application of AI in that particular industry. Understanding the problem being solved and the potential impact is essential.
3. Partnerships and Collaborations: Investigate if the AI token project has formed strategic partnerships or collaborations with reputable organizations or established players in the industry. Partnerships can provide validation and open doors for wider adoption.
4. Roadmap and Progress: Examine the project's roadmap to understand its long-term goals and milestones. Assess the progress made so far, including the development of the technology, any achieved milestones, and the overall execution of the project.
5. Tokenomics: Analyze the token economics and distribution model of the AI token. Look into factors such as token supply, allocation, and potential token utility within the ecosystem (a small arithmetic sketch follows this list). Additionally, consider if there are any mechanisms in place to incentivize participation and support the growth of the ecosystem.
6. Community and Reputation: Evaluate the project's community engagement, social media presence, and reputation within the crypto space. A strong community signals active interest and support for the project.
7. Regulatory Compliance: Ensure that the AI token project is compliant with relevant regulations and guidelines. This factor is increasingly important as regulatory frameworks around cryptocurrencies and tokens evolve.
Remember, investing in AI tokens, like any investment, carries risks. It is crucial to conduct thorough research, diversify your investments, and consider seeking advice from financial professionals before making any investment decisions.
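As a small illustration of point 5 above, here is the basic arithmetic behind circulating market cap and fully diluted valuation, sketched in Python. The price and supply numbers are made-up placeholders, not real AIS token figures; substitute values from the project’s own documentation.

```python
# Two basic tokenomics figures: what the circulating tokens are worth today,
# and what the full maximum supply would be worth at the same price.
def circulating_market_cap(price_usd: float, circulating_supply: float) -> float:
    return price_usd * circulating_supply

def fully_diluted_valuation(price_usd: float, max_supply: float) -> float:
    return price_usd * max_supply

price = 0.05               # hypothetical price per token in USD
circulating = 40_000_000   # hypothetical circulating supply
max_supply = 100_000_000   # hypothetical maximum supply

print(circulating_market_cap(price, circulating))  # 2000000.0
print(fully_diluted_valuation(price, max_supply))   # 5000000.0
# A large gap between the two signals heavy future dilution.
```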
#AIS#ai#ai_token#artificial_intelligence#ai technology#ai tools#chatgpt#machine learning#generative ai#crypto
2 notes
·
View notes
Text
What are the latest warehouse automation technologies?
Gone are the days of manual labour and static, inefficient operations. Today, we stand at the forefront of a revolution driven by the latest warehouse automation technologies. These innovations reshape how businesses handle inventory, fulfil orders, and optimize supply chains.
From autonomous robots and artificial intelligence to the Internet of Things (IoT) and advanced data analytics, we'll explore how these technologies enhance efficiency, reduce costs, and ensure seamless operations in modern warehouses.
1-Robotic Process Automation (RPA): RPA involves using software robots to automate repetitive tasks like data entry, order processing, and inventory tracking. The robots interact with various systems and applications to streamline workflows.
2-Autonomous Mobile Robots (AMRs): Robotic vehicles called AMRs navigate and operate in warehouses without fixed infrastructure, such as conveyor belts or tracks. They perform tasks like picking, packing, and transporting goods.
3-Automated Guided Vehicles (AGVs): AGVs are similar to AMRs but typically follow fixed paths or routes guided by physical markers or magnetic tape. They are commonly used for material transport in warehouses and distribution centres.
4-Goods-to-Person Systems: This approach involves bringing the items to the workers rather than having workers travel throughout the warehouse to pick items. Automated systems retrieve and deliver goods to a workstation, reducing walking time and improving efficiency.
5-Automated Storage and Retrieval Systems (AS/RS): AS/RS systems use robotics to store and retrieve items from racks or shelves automatically. These systems can significantly increase storage density and optimize space utilization.
6-Collaborative Robots (Cobots): Cobots are designed to work alongside human workers. They can assist with tasks like picking, packing and sorting, enhancing efficiency and safety.
7-Warehouse Management Systems (WMS): While not a physical automation technology, modern WMS software uses advanced algorithms and AI to optimize inventory management, order fulfilment, and warehouse processes (a small sketch of one such calculation follows this list).
8-Vision Systems and Machine Learning: Computer vision technology combined with machine learning can be utilized for tasks such as object recognition, inventory movement tracking, and quality control.
9-IoT and Sensor Networks: Internet of Things (IoT) devices and sensors collect real-time data on inventory levels, environmental conditions, equipment health, and more, enabling better decision-making and predictive maintenance.
10-Voice and Wearable Technologies: Wearable devices and voice-guided picking systems can provide workers with real-time information and instructions, improving accuracy and efficiency.
11-Automated Packaging Solutions: These systems automate the packaging process by selecting the appropriate box size, sealing packages, and applying labels, reducing manual labour and ensuring consistent packaging quality.
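As a small illustration of point 7 above, here is a sketch of one calculation a WMS-style system might run: a reorder point with safety stock. The demand and lead-time figures are illustrative assumptions, not data from any real warehouse or any specific WMS product.

```python
# Reorder point: the stock level at which a replenishment order is triggered.
def reorder_point(avg_daily_demand: float,
                  lead_time_days: float,
                  safety_stock: float) -> float:
    """Average demand during the replenishment lead time, plus a safety buffer."""
    return avg_daily_demand * lead_time_days + safety_stock

def safety_stock(max_daily_demand: float, max_lead_time: float,
                 avg_daily_demand: float, avg_lead_time: float) -> float:
    """Buffer covering the gap between worst-case and average usage."""
    return max_daily_demand * max_lead_time - avg_daily_demand * avg_lead_time

# Hypothetical SKU: 40 units/day on average, up to 60; 5-day lead time, up to 7.
ss = safety_stock(max_daily_demand=60, max_lead_time=7,
                  avg_daily_demand=40, avg_lead_time=5)
rop = reorder_point(avg_daily_demand=40, lead_time_days=5, safety_stock=ss)
print(ss, rop)  # 220 420
```

In practice a WMS would recompute these thresholds continuously from live inventory and demand data rather than from fixed inputs like these.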

1 note
·
View note