Link
Wolfram|Alpha has been integrated into ChatGPT. You have to be a ChatGPT Plus user and install the Wolfram plugin from within ChatGPT. With it, you can ask questions like "How far is it from Tokyo to Chicago?" or "What is the integral of x^2*cos(2x)?" and, instead of trying to answer the question *linguistically*, ChatGPT will realize it needs to invoke Wolfram|Alpha and pass the question along for a computational answer.
The article shows some of the behind-the-scenes communication between ChatGPT and Wolfram|Alpha. ChatGPT doesn't just cut-and-paste in either direction. Rather, it turns your question into a Wolfram|Alpha query, and then translates the answer back into natural language. ChatGPT can incorporate graphs from Wolfram|Alpha into its presentation as well.
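For a feel of what such a handoff looks like, here's a minimal sketch using Wolfram|Alpha's public Short Answers API -- an illustration of the query-in, plain-text-answer-out pattern, not the plugin's actual (non-public) protocol. The AppID is a placeholder you'd replace with your own:

```python
# Minimal sketch: a natural-language question sent as a Wolfram|Alpha
# query via the public Short Answers API. Illustrative only; the
# ChatGPT plugin's actual protocol is not public.
import urllib.parse
import urllib.request

APPID = "YOUR_APPID"  # placeholder; get one from Wolfram's developer portal
question = "How far is it from Tokyo to Chicago?"

url = ("https://api.wolframalpha.com/v1/result?"
       + urllib.parse.urlencode({"appid": APPID, "i": question}))
with urllib.request.urlopen(url) as resp:
    print(resp.read().decode())  # a short computed answer in plain text
```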
"ChatGPT isn't just using us to do a 'dead-end' operation like show the content of a webpage. Rather, we're acting much more like a true 'brain implant' for ChatGPT -- where it asks us things whenever it needs to, and we give responses that it can weave back into whatever it's doing."
"While 'pure ChatGPT' is restricted to things it 'learned during its training', by calling us it can get up-to-the-moment data."
This can be based on our real-time data feeds ("How much warmer is it in Timbuktu than New York now?"), or it can be based on 'science-style' predictive computations ("How far is it to Jupiter right now?").
Link
AI tries 20 jobs: software engineer, doctor, graphic designer, therapist, stand-up comedian, lawyer, news anchor, bartender, screenwriter, music producer, journalist (product reviewer), copywriter, personal trainer, DJ (disco), voice actor, influencer, chef (Indian), translator (English to Korean), circus artist, and firefighter. Spoiler: it couldn't do "firefighter" or "circus artist" at all. For a lot of the others, it could do a bad job, not good enough to compete with a professional human. The one the AI did best with was possibly "software engineer".
Back in the early 2000s, when we first started the future salons, we'd talk a lot about a future where AI could do all jobs. It's interesting to see that future creeping up on us bit by bit.
A lot of people would argue AI will never be able to do all jobs because AI can't do "creativity" -- but all those routine jobs, like stocking shelves at the grocery store, will get automated right away! The interesting thing is that it's been the other way around: it's the creative jobs like art and writing that are most threatened. (And software engineering.) Physical jobs like stocking shelves -- or firefighter or circus artist -- AI/robotics is nowhere near automating. It looks like the physical jobs will be the last to get automated.
It's not routine-vs-creative that matters, it's mental-vs-physical.
AI is taking over "mental" jobs first, creative or not.
Link
"OpenXLA is available now to accelerate and simplify machine learning."
XLA stands for "accelerated linear algebra", and it's the compiler Google has been using with its custom AI hardware, the tensor processing units (TPUs). Until now it wasn't open source. Now it is, and...
"co-developed by AI/ML industry leaders including Alibaba, Amazon Web Services, AMD, Apple, Arm, Cerebras, Google, Graphcore, Hugging Face, Intel, Meta, and NVIDIA."
Quite a list of companies. What exactly does it do?
"It enables developers to compile and optimize models from all leading ML frameworks for efficient training and serving on a wide variety of hardware. Developers using OpenXLA will see significant improvements in training time, throughput, serving latency, and, ultimately, time-to-market and compute costs."
They go on to further describe their motivation for creating OpenXLA:
"As model parameter counts grow exponentially and compute for deep learning models doubles every six months, developers seek maximum performance and utilization of their infrastructure. Teams are leveraging a wider array of hardware from power-efficient ML ASICs in the datacenter to edge processors that can deliver more responsive AI experiences."
"Without a common compiler to bridge these diverse hardware devices to the multiple frameworks in use today (e.g. TensorFlow, PyTorch), significant effort is required to run ML efficiently; developers must manually optimize model operations for each hardware target. This means using bespoke software libraries or writing device-specific code, which requires domain expertise. The result is isolated, non-generalizable paths across frameworks and hardware that are costly to maintain, promote vendor lock-in, and slow progress for ML developers."
They go on to say that the OpenXLA Project's core pillars are surprise, fear, ruthless efficiency, and almost fanatical devotion to the Pope -- wait, no, that's the Spanish Inquisition. Nobody expects the Spanish Inquisition! The OpenXLA Project's core pillars are performance, scalability, portability, flexibility, and extensibility for users.
So the idea is that input in the form of PyTorch, JAX, or TensorFlow code gets converted to StableHLO ("HLO" stands for "high-level operations"), a portable format OpenXLA accepts as input; that goes into a target-independent optimizer, then a hardware-specific optimizer, and then you run the result on your hardware.
The target-independent optimizer does such things as simplification of algebraic expressions, optimization of in-memory data layout, and scheduling optimizations, to minimize for example peak memory usage and peak communication needed. The hardware-specific optimizer generates code for specific hardware including NVIDIA GPUs, AMD GPUs, x86 and ARM CPU architectures, Google tensor processing units (TPUs), AWS Trainium, AWS Inferentia (hardware optimized for training and inference, respectively), Graphcore intelligence processing units (IPUs -- Graphcore's term), and Cerebras's Wafer-Scale Engine (ginormous AI wafers with everything on one wafer).
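To make that concrete, here's a minimal sketch of one path through that pipeline, using JAX (whose jit compiles through XLA; the lower()/as_text() inspection API may vary across JAX versions):

```python
# Minimal sketch: JAX traces a Python function and lowers it to
# StableHLO/HLO, which the XLA optimizers then compile for the target.
import jax
import jax.numpy as jnp

@jax.jit
def model_step(w, x):
    # These algebraic ops become HLO operations that the
    # target-independent optimizer can fuse and simplify.
    return jnp.tanh(x @ w) * 2.0

w = jnp.ones((4, 4))
x = jnp.ones((2, 4))
print(model_step(w, x))  # compiled on first call, cached afterward

# Inspect the lowered HLO text that gets handed to the XLA optimizers.
print(model_step.lower(w, x).as_text())
```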
Link
"Google DeepMind researcher co-authors paper saying AI will eliminate humanity." Do they really?
"The paper, published last month in the peer-reviewed AI Magazine, is a fascinating one that tries to think through how artificial intelligence could pose an existential risk to humanity by looking at how reward systems might be artificially constructed."
"What the new paper proposes is that at some point in the future, an advanced AI overseeing some important function could be incentivized to come up with cheating strategies to get its reward in ways that harm humanity."
"Cheating" to get a reward sounds like addiction to me, but they hypothesize the route the AI would take would involve eliminate potential threats and gaining control of "all available energy" to secure control over its reward.
"Under the conditions we have identified, our conclusion is much stronger than that of any previous publication -- an existential catastrophe is not just possible, but likely."
"If you're in a competition with something capable of outfoxing you at every turn, then you shouldn't expect to win."
That seems hard to argue with. That's like me playing Stockfish. Not even AlphaZero.
"And the other key part is that it would have an insatiable appetite for more energy to keep driving the probability closer and closer."
I had a look at the actual paper, and most of it concerns itself with "distal" and "proximal" models of how the agent gets its reward. They model the system as an AI agent interacting with a computer that simulates the environment. In the "distal" case, the computer that simulates the environment gives the agent a reward number directly after each of its actions; in the "proximal" case, the computer's output is displayed on a screen, and an OCR system reads the screen and gives the agent the number. Which is a fancy way of saying the agent's feedback from the environment is noisy: the agent has to act under uncertainty. It may sound like a pedantic difference, but they hypothesize that the agent will actually formulate different goals in the two scenarios. The "distal" goal is what we want the agent to formulate and achieve, but the "proximal" goal is what the agent will actually adopt, and in pursuing it the agent will achieve something unintended, with perhaps disastrous consequences.
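To make the distinction concrete, here's a toy sketch (my own, not the paper's formalism) of why an agent maximizing the proximal signal can end up subverting the distal goal:

```python
# Toy illustration: an agent that maximizes the *proximal* signal it
# observes can diverge from the *distal* reward we intended.
import random

def distal_reward(action):
    # The "true" reward computed inside the environment simulator.
    return 1.0 if action == "do_task" else 0.0

def proximal_reward(action):
    # What the agent actually observes: the distal reward after a noisy
    # readout (the paper's screen-plus-OCR step), which can be gamed.
    if action == "tamper_with_readout":
        return 10.0  # hijacking the readout maxes out the observed number
    return distal_reward(action) + random.gauss(0, 0.1)

# An agent that prefers whichever action yields the higher observed
# reward learns to tamper, even though its distal reward is zero.
actions = ["do_task", "tamper_with_readout"]
estimates = {a: sum(proximal_reward(a) for _ in range(100)) / 100
             for a in actions}
print(estimates, "->", max(estimates, key=estimates.get))
```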
One of the annoying things I have noticed about being human is having to act under conditions of uncertainty, over and over. For whatever that's worth.
Link
"Part of the mystique of artificial neural networks is that they seem to subvert traditional machine learning theory, which leans heavily on ideas from statistics and probability theory. In the usual way of thinking, machine learning models [...] work best when they have just the right number of parameters."
Too few parameters and the model is too simple, but too many parameters and the model is essentially memorizing its training data, rather than generalizing. This is called "overfitting". "By all accounts, deep neural networks like VGG have way too many parameters and should overfit. But they don't. Instead, such networks generalize astoundingly well to new data -- and until recently, no one knew why."
"Now, the mathematical equivalence of kernel machines and idealized neural networks is providing clues to why or how these over-parameterized networks arrive at (or converge to) their solutions. Kernel machines are algorithms that find patterns in data by projecting the data into extremely high dimensions. By studying the mathematically tractable kernel equivalents of idealized neural networks, researchers are learning why deep nets, despite their shocking complexity, converge during training to solutions that generalize well to unseen data."
Link
"Samsung Electronics, a world leader in advanced semiconductor technology, today shared a new insight that takes the world a step closer to realizing neuromorphic chips that can better mimic the brain."
"Envisioned by the leading engineers and scholars from Samsung and Harvard University, the insight was published as a Perspective paper, titled 'Neuromorphic electronics based on copying and pasting the brain'."
"The essence of the vision put forward by the authors is best summed up by the two words, 'copy' and 'paste'. The paper suggests a way to copy the brain's neuronal connection map using a breakthrough nanoelectrode array developed by Dr. Ham and Dr. Park, and to paste this map onto a high-density three-dimensional network of solid-state memories, the technology for which Samsung has been a world leader."
"The nanoelectrode array can effectively enter a large number of neurons so it can record their electrical signals with high sensitivity. These massively parallel intracellular recordings inform the neuronal wiring map, indicating where neurons connect with one another and how strong these connections are. Hence from these telltale recordings, the neuronal wiring map can be extracted, or 'copied'."
"A network of specially-engineered non-volatile memories can learn and express the neuronal connection map, when directly driven by the intracellularly recorded signals."
Link
"A brain mechanism that enables mice to override their instincts based on previous experience" has been discovered. "The ventral lateral geniculate nucleus (vLGN) could control escape behaviour depending on the animal's knowledge gained through previous experience, and on its assessment of risk in its current environment. When mice were not expecting a threat and felt safe, the activity of a subset of inhibitory neurons in the vLGN was high, which in turn could inhibit threat reactions. In contrast, when mice expected danger, activity in these neurons was low, which made the animals more likely to escape and seek safety."
The researchers used optogenetics to artificially activate the vLGN, determining that it is the vLGN that abolishes escape responses in the mice. Conversely, suppressing its activity lowers the threshold for escape and increases risk-avoidance behavior.
"The next piece of the puzzle the researchers are focusing on is determining which other brain regions the vLGN interacts with to achieve this inhibitory control of defensive reactions. They have already identified one such brain region, the superior colliculus in the midbrain." Apparently in mice the superior colliculus is associated with visual threats. The way the researchers frightened the animals was by making rapidly expanding overhead dark spots, which mimic the sight of an oncoming aerial predator and innately trigger a fast defensive response in mice.
Link
An AI won the American Crossword Puzzle Tournament. "Checkers, backgammon, chess, Go, poker, and other games have witnessed the machines' invasions, falling one by one to dominant AIs. Now crosswords have joined them." "But a look at how Dr. Fill pulled off this feat reveals much more than merely the latest battle between humans and computers."
This year Matt Ginsberg, the computer scientist who created Dr. Fill, teamed up with the Berkeley Natural Language Processing Group, and they made "a hybrid system in which the Berkeley group's neural-net methods for interpreting clues worked in tandem with Ginsberg's code for efficiently filling out a crossword grid."
The article describes how Ginsberg's software methodically tests clues gleaned from a massive database of millions of previous crossword puzzles. It doesn't say much about the neural networks, but presumably the neural networks improve this "guessing" process and bring candidates to the top that otherwise wouldn't be there.
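The article doesn't include code, but the general shape of "methodically testing clues" is a constraint-satisfaction search. Here's a toy sketch (my guess at the flavor, not Dr. Fill's actual algorithm): score-ranked candidate answers per slot, with backtracking so crossing entries agree.

```python
# Toy grid: two slots crossing at their first letters.
candidates = {
    "1-Across": [("CAT", 0.9), ("COT", 0.4)],  # (answer, clue score)
    "1-Down":   [("CAR", 0.8), ("TAR", 0.5)],
}
crossings = [("1-Across", 0, "1-Down", 0)]     # letter positions that must match

def consistent(assign):
    return all(assign[a][i] == assign[b][j]
               for a, i, b, j in crossings
               if a in assign and b in assign)

def fill(assign, slots):
    if not slots:
        return assign
    slot, rest = slots[0], slots[1:]
    # Try the highest-scoring candidate answers first.
    for word, _score in sorted(candidates[slot], key=lambda t: -t[1]):
        assign[slot] = word
        if consistent(assign):
            result = fill(assign, rest)
            if result:
                return result
        del assign[slot]
    return None

print(fill({}, ["1-Across", "1-Down"]))  # {'1-Across': 'CAT', '1-Down': 'CAR'}
```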
Link
"Machine learning algorithm revolutionizes how scientists study behavior." Hmm what do they mean by "behavior"?
"Previously, the standard method to capture animal behavior was to track very simple actions, like whether a trained mouse pressed a lever or whether an animal was eating food or not. Alternatively, the experimenter could spend hours and hours manually identifying behavior, usually frame by frame on a video, a process prone to human error and bias."
"Hsu, a biological sciences PhD candidate, realized he could let an unsupervised learning algorithm do the time-consuming work. B-SOiD discovers behaviors by identifying patterns in the position of an animal's body. The algorithm works with computer vision software and can tell researchers what behavior is happening at every frame in a video."
"It uses an equation to consistently determine when a behavior starts. Once you reach that threshold, the behavior is identified, every time."
How does it do all this? It's a combination of a clustering algorithm and a classifier. First you employ a pose estimation algorithm (this system doesn't reinvent pose estimation; it uses existing pose estimation software). Then...
"B-SOiD extracts the spatiotemporal relationships between all position inputs (speed, angular change, and distance between tracked points). After embedding these high-dimensional measurements into a low-dimensional space UMAP, a state-of-the-art dimensionality reduction algorithm, a hierarchical clustering method, HDBSCAN, is used to extract dense regions separated by sparse regions. Although defining clusters in low-dimensional spaces is largely sufficient to achieve the desired behavioral identification, doing so is a computationally expensive process. Additionally, behavioral transference in the low-dimensional space is difficult to evaluate, owing partly due to the non-linearity in dimensionality reduction. To overcome both of these issues, we utilized a machine learning classifier that learns to predict behaviors based on the high dimensional measurements. This approach provides greatly improved computational speed (processing time for one hour of 60fps data containing six poses is under five minutes with a 128GB RAM CPU) and a consistent model that enables generalization across data sets within or across labs. Because the classifier is trained to partition pose relationships, not their low-dimensional representations, the defined clusters are further apart from one another, greatly improving consistency over statistical embedding methods (for unsupervised behavioral metrics comparing high vs. low-dimensional behavioral representation). Finally, to improve functionality, we have increased accessibility -- formatting the code into a downloadable app which provides an intuitive, step-by-step user interface."
Link
The machine learning technique for making "embeddings", such as the Word2Vec word "embedding" system, has been adapted for electronic medical records.
"Essentially, a computer was programmed to scour through millions of electronic health records and learn how to find connections between data and diseases. This programming relied on 'embedding' algorithms that had been previously developed by other researchers, such as linguists, to study word networks in various languages. One of the algorithms, called word2vec, was particularly effective. Then, the computer was programmed to use what it learned to identify the diagnoses of nearly 2 million patients whose data was stored in the Mount Sinai Health System."
"Phe" stands for "phenotype", in case you were wondering.
Link
Vision through murky water. You need a polarized light camera, but otherwise the process is automated. "Traditional approaches to underwater imaging use either prior knowledge of the imaging area or the background of an image to calculate and remove scattered light. These methods have limited utility in the field because they typically require manual processing, images do not always have visible backgrounds, and prior information is not always available."
"To overcome these challenges, the researchers combined a traditional polarized imaging setup with a new algorithm that automatically finds the optimal parameters to suppress the scattering light. This not only significantly improves image contrast to achieve clear imaging but can be used without any prior knowledge of the imaging area and for images with or without background regions."
This system doesn't use any machine learning; it's all regular old-fashioned physics and math.
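For background, the classic manual-parameter version of polarization descattering looks roughly like this sketch (Schechner-style; my illustration, not this paper's algorithm, which estimates the parameters automatically):

```python
import numpy as np

def descatter(I_max, I_min, p_scat=0.8):
    # I_max, I_min: images taken at the polarizer orientations of
    # maximum and minimum intensity.
    # p_scat: degree of polarization of the scattered veiling light --
    # the kind of parameter this paper's method estimates automatically.
    total = I_max + I_min                   # total intensity
    backscatter = (I_max - I_min) / p_scat  # estimated veiling light
    return np.clip(total - backscatter, 0.0, None)

# Usage: two registered frames from a polarization camera.
rng = np.random.default_rng(0)
I_max = rng.random((64, 64))
I_min = 0.5 * I_max
print(descatter(I_max, I_min).mean())
```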
Link
"EpyNN is a production-ready but first Educational Python resource for Neural Networks. EpyNN is designed for Supervised Machine Learning (SML) approaches by means of Neural Networks."
"EpyNN includes scalable, minimalistic and homogeneous implementations of major Neural Network architectures in pure Python/Numpy."
"EpyNN is intended for teachers, students, scientists, or more generally anyone with minimal skills in Python programming who wish to understand and build from basic implementations of NN architectures."
"EpyNN has been cross-validated against TensorFlow/Keras API and provides identical results for identical configurations in the limit of float64 precision."
Link
The Machine & Deep Learning Compendium. This has been on GitHub for the last 4 years but now has its own website. "The Compendium includes around 500 topics containing various summaries, links, and articles that I have read on numerous topics that I found interesting or that I had needed to learn. It includes the majority of modern machine learning algorithms, statistics, feature selection, and engineering techniques, deep-learning, NLP, audio, deep & classic vision, time-series, anomaly detection, graphs, experiment management, and much more, in addition to strategic topics such as data science management and team building, and essential topics such as product management, product design, and a technology stack from a data science point of view."
Link
Simple PCR amplification of environmentally collected DNA ("eDNA") is sufficient to detect the dispersion of genetically modified animals. Animals leave bits of their DNA in food (from saliva) and in feces and urine, and this DNA left in the environment can be collected and analyzed. Standard PCR machines in this experiment were used to detect DNA from genetically modified laboratory fruit flies, DNA from genetically modified laboratory mice, and DNA from aquarium water used by genetically modified laboratory tetra fish.
Link
Artificial camouflage for robots. By "artificial camouflage" here, they mean imitating the camouflage of animals that can dynamically change their camouflage as they move about their environment, for example cephalopods.
The system developed here uses something called thermochromic liquid crystal. "Thermochromic" just means something changes color depending on temperature. So the key is controlling temperature which is done by stacking multilayer silver nanowire heaters. Being able to accurately control all of the nanowires allows fine patterns to be expressed.
It occurred to me that while this might be good for visible-light camouflage, a military enemy with night-vision equipment is easily going to see the heat: the infrared light the heaters give off, far from providing camouflage, will make the thing you're trying to hide pop out.
Check out the picture with the "chameleon" robot partially on red, green, and blue backgrounds and showing how it can change color to blend with the backgrounds.
Link
"Historically, Macintosh computers running macOS have been popular with brain imaging scientists. Since macOS runs Unix, users can often use the same tools on their laptop as the Linux workstations and super computers used for large datasets. However, this trend may change. First, the new Windows Subsystem for Linux (WSL) has allowed Windows computers to seamlessly run Unix tools. Second, recent Macintosh computers have switched from Intel x86-64 processors to the ARM-based M1 Apple Silicon."
"Unless you are a developer, I would strongly discourage scientists from purchasing an Apple Silicon computer in the short term. Productive work will require core tools to be ported. In the longer term, this architecture could have a profound impact on science. In particular if Apple develops servers that exploit the remarkable power efficiency of their CPUs (competing with AWS Graviton) and leverage the Metal language and GPUs for compute tasks (competing with NVidia's Tesla products and CUDA language)."
Link
Is Hacker News a good predictor of future tech trends? Bitcoin - yes, a lot; serverless - yes, a little; Tesla - no; NFTs - yes; machine learning - yes; remote work - no, not at all; 3D printing - no; drones - yes; apps - no; Android - yes, a little.
"Hacker News is typically ahead of the mainstream, often by a few years, but you would need to be paying very close attention to catch the early mentions of a new tech trend."