#heuristic search and function
Explore tagged Tumblr posts
transmutationisms · 4 months ago
Note
forgive me if I'm being obtuse, but isn't every medical diagnosis an artifact of human taxonomic schemes? I know I'm not treading new ground here and that diseases/medical conditions aren't like, drawn from thin air in the way a lot of psychiatric conditions are i suppose it just confuses me a bit
no, & this is ancillary in some ways to what i'm actually criticising about psychiatry. it's true there are non-psychiatric medical diagnoses that work analogously to psychiatric ones: think ME/CFS, hEDS, fibromyalgia, most things that have 'idiopathic' in the name. these are names given to clusters of symptoms, like the way that psychiatric labels are just names for a certain set of behaviours. we don't know what causes these issues, though people have various theories and there is (a varying amount of) research ongoing that aims to find the etiologies.
however, that's not the case for all non-psychiatric diagnoses. think about a viral or bacterial infection, a torn ACL, or Down syndrome. these are diagnoses that do refer to specific infectious agents, anatomical problems, genetic variants, and so forth. that doesn't mean the diagnosis is always easy to make, or that it's always made correctly, but it does mean that when you are diagnosed with one of these problems, a specific cause is being identified (& sometimes they might even be right). it's not just a convenient shorthand name for a group of symptoms, even though of course, most things that are diagnosed are done so because they cause and are associated with symptoms. (most but not all lol.)
psychiatry is distinct as a discipline in that all of its diagnoses function the first way i described. they are not referring to disease entities or processes; there is no credible hypothesis for a biological etiology. why? fundamentally, because the psychiatric diagnoses generally exist to pathologise socially unwanted behaviour: the taxonomy is a reflection of a political agenda and the priorities of clinicians. it's not even really an adequate framework for grouping patients together, because you get placed in a category based only on, again, external manifestations (behaviours). who says any two people who hallucinate or cut themselves are doing it for the exact same reasons? well, no one, because again, even getting the same psych diagnosis doesn't indicate anything about an actual etiology or underlying biological process or anything. there is no referent; the psychiatric diagnosis is only defined heuristically and circularly.
many people are confused by this because, in both popular and professional discourse, psychiatric diagnoses are consistently spoken about as though they DO refer to an underlying discoverable disease or disease process. despite hundreds of years of looking for such things, psychiatrists are yet to find any, and if they did, the condition in question would be reassigned to the relevant medical specialty, because psychiatrists also cannot treat infectious agents, anatomical problems, harmful genetic variants, and so on. (when i worked as a bibliographer we used to have extremely funny arguments over whether materials pertaining to the psychiatric search for biological disease processes should be categorised under psychiatry, neuroscience, medicine general, philosophy of medicine, 'science and society,' or just 'controversies and disputes' with no real subject label.)
to be clear, when i say psychiatric diagnoses aren't referring to known or discoverable disease processes, that's not a moral indictment. it's not an inherently bad diagnostic process, provided the patient understands that is what the process actually is. sometimes we just don't know yet what we're dealing with; sometimes a heuristic diagnostic label is just a way of billing insurance for a treatment that we know helps some similar patients, even if we don't know why.
however, with psychiatric diagnoses, evidence for such efficacy is widely lacking and often even negative; this is fundamentally because psychiatric diagnoses are not formulated on the basis of patient needs but on the basis of employer and state needs to cultivate a productive workforce and by corollary enforce a notion of mental 'normality.' all medicine under capitalism has a biopolitical remit; psychiatry has only a biopolitical remit. it has never at any point succeeded in making diagnoses that refer to demonstrable disease processes, because that's definitionally not even under its purview. these diagnoses have never been satisfactorily shown to be related to any disease process—and why should we expect that? historically, that's not what they exist for; it's not the problem they were invented to solve. they are social technologies; they're not illnesses.
720 notes · View notes
kafkaoftherubble · 1 year ago
Text
试问:这是啥?//Question: what is this?
Tumblr media
and I only just remembered about these, but I kept seeing these turtle dragon statues around. I need to research them more but honestly, they're so cute I might just call them my 6th favorite type
@secondhanddragon So, you wanted to know what this is?
I went to search for their information and the creature with the highest probability (so high my confidence level is at 99%) of matching the above is...
金鳌,或称龙龟
Golden Ao, also known as Dragon Turtle
Tumblr media
It's classified as a divine turtle (神龟) rather than a dragon; my guess is that since it is not one of the Dragon's Sons, it doesn't count in a traditional bestiary.
It has the shell of a turtle but the body of a dragon, and once again, it's useful in Fengshui. It symbolizes power and wealth.
-----------------------
However! This is not the only turtle-dragon in Chinese mythology. In fact, my Brain's first thought—though we kinda immediately discarded it—was the other one...
赑屃,或称霸下、龟趺
Bixi, also known as Ba Xia ("Dominate Below") and Gui Fu ("Foot Tortoise")
Tumblr media
Now this is one of the Dragon's Nine Sons. Bixi is strong; the word 赑 basically means "able to support great weight," but it's not a common word by any stretch. It's the son of the dragon and a tortoise, and later depictions of Bixi gradually show more draconic characteristics than its earlier, clearly tortoise-like form.
How did you rate your confidence level? Why are you sure that the image I showed you was a Dragon Turtle and not a Bixi?
It's the function of Bixi that gave it away!
Bixi is always made to carry steles and important tablets on its back, especially the ones by emperors. The idea is that the deeds and/or names of those recorded on this stele/tablet will live on forever with Bixi, who has the steadfast longevity of a tortoise and the divine power of a dragon.
It possesses quite a few meanings:
power and status; hence its steles are often those of emperors, high chiefs, and whatnot.
longevity, since it is the son of a tortoise
local cultural symbol (local tribal importance)
and mythological symbolism (in Taoism)
One of the myths concerning Bixi is this: Bixi used to carry mountains on its back and wreak chaos through tidal waves. Then, one of China's ancient emperors, Da Yu, subjugated Bixi and made it work under him. Da Yu was known for his Great Flood project—he devoted much of his life to solving it, and in this mythologized version, Bixi was one of those divine creatures who helped*. When the project was over, Da Yu worried that Bixi would return to its old wayward ways, so he created this gigantic stone tablet detailing Bixi's contribution and great deeds and asked Bixi to carry it around. The weight of that tablet caused Bixi to be too sluggish to wreak havoc anymore.
*You might be interested to know that another one who helped Da Yu was "Ying Long (应龙)"—yes, this is a full-fledged dragon.
This is how I'm confident that your image was that of a dragon turtle and not a Bixi.
---
How do I distinguish between a Bixi and Black Tortoise (Xuan Wu 玄武)?
Same deal. It's the stele/tablet. Also, Bixi has teeth. Tortoises do not. Later Bixi has draconic features. Xuan Wu is always... a tortoise! Hahahaha.
----
So there you go. I'm most confident that it is a dragon turtle. However, Bixi has more stories to it due to being a Son of the Dragon.
As a rule of heuristics (not statistics; I did not collect data for this), if a Chinese myth creature is found as a small, delicate ornament, it's quite likely that it has connotations in Fengshui. It's the same for Pixiu and Qilin, and the same for dragon turtles too.
Thank you for reading my ramble! I hope it was fun and informative for you!
----
Citations and Links:
"Dragon Turtle" in Wikipedia.
"Bixi" in Wikipedia.
"金鳌 (中国古代神兽)" in Baidu Baike. (In Chinese)
"赑屃(中国古代神兽)" in Baidu Baike. (In Chinese)
Kafka's Notes: I generally don't like Baidu Baike because it lacks citational rigor, so it's hard to trace information to its source. However, it was very useful for me to search for names and preliminary information, so I included it here. Some of the stuff I mentioned (like the Legend of Bixi and Da Yu) was from Baidu Baike; I judged that folklore is not always recorded in academic sources, so this instance of lacking citational rigor is acceptable.
15 notes · View notes
ylespar · 1 year ago
Text
Luke, David. (2007). The science of magic: A parapsychological model of psychic ability in the context of magical will. Journal for the Academic Study of Magic. 4. 90-119.
Abstract: This paper describes a parapsychological model of psychic ability in terms of its intrinsically magical undercurrent, thereby providing a bridge, hitherto largely unconnected, between science and magic. Initially proposed by Rex Stanford in the 1970s, the model seeks to explain the unconscious everyday use of ‘psi’ (precognition, telepathy, clairvoyance, or psychokinesis) as a means of serving the needs and desires of the organism. The model, termed ‘psi-mediated instrumental response’ (PMIR), is based on the principles and research of cognitive, behaviourist, and parapsychology from an evolutionary perspective. Yet it is shown that, by extrapolating the inferences of this model and by subtly re-orientating it to a magical perspective, it can serve as a useful psychology of magical operation. By drawing comparisons between Stanford's model and the occult psychology of the chaos magic current, and with particular regard to the work of Austin Osman Spare, the essay highlights the parallels between these bodies of thought. While this demonstrates some synonymous mechanics for the manifestation of the magical desire, it also offers a heuristic model for the functioning of magic that is compatible with mainstream cognitive science and which can be, and has been, tested empirically. Furthermore, some consideration is given to scientific research's magical nature, which has been unearthed in the process of searching for a science of magic. Despite objections from both magicians and scientists, by cross-pollinating the flowers of these two fields in this way, possibilities emerge for the utilisation of empirical research to augment magical belief systems for those with a scientific leaning, whilst simultaneously illuminating new regions for growth in the formation of occult science.
4 notes · View notes
govindhtech · 1 month ago
Text
AlphaEvolve Coding Agent using LLM Algorithmic Innovation
Tumblr media
AlphaEvolve
AlphaEvolve is a powerful coding agent, driven by large language models, that discovers and optimises difficult algorithms. It tackles both simple and highly complex mathematical and computational problems.
AlphaEvolve combines the rigour of automated evaluators with the creativity of LLMs. This combination lets it validate solutions and impartially assess their quality and correctness, while using evolution to refine its best ideas. It orchestrates an autonomous pipeline that queries LLMs and runs computations to develop algorithms for user-specified goals, evolving programs so that their scores on the automated evaluation metrics keep improving.
Human users define the goal, set the assessment requirements, and provide an initial solution or code skeleton. The user must supply a way, usually a function, to automatically evaluate generated solutions by mapping them to scalar metrics to be maximised. AlphaEvolve also lets users annotate the code blocks in a codebase that the system is allowed to evolve; the remaining code serves as a skeleton for evaluating the evolved parts. The initial program may be simple, but it must be complete.
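To make that concrete, here is a rough sketch of what such a user-supplied setup might look like. The marker comments, the `evaluate` signature, and the bin-packing toy problem are all illustrative assumptions rather than AlphaEvolve's actual interface.

```python
# Hypothetical skeleton (names and markers are assumptions, not AlphaEvolve's API).
# The user marks the region the system may rewrite and supplies evaluate(),
# which maps any candidate program to scalar metrics to be maximised.

def pack_items(items, bin_size):
    # EVOLVE-BLOCK-START  (assumed marker: only code inside gets evolved)
    # Initial, deliberately naive solution: first-fit packing.
    bins = []
    for item in items:
        for b in bins:
            if sum(b) + item <= bin_size:
                b.append(item)
                break
        else:
            bins.append([item])
    return bins
    # EVOLVE-BLOCK-END


def evaluate(program_module):
    """Score a candidate program; higher is better (fewer bins used)."""
    import random
    random.seed(0)
    instances = [[random.uniform(0.1, 0.7) for _ in range(50)] for _ in range(20)]
    total_bins = 0
    for items in instances:
        bins = program_module.pack_items(items, bin_size=1.0)
        if any(sum(b) > 1.0 + 1e-9 for b in bins):   # reject invalid packings
            return {"score": float("-inf")}
        total_bins += len(bins)
    return {"score": -total_bins / len(instances)}
```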
AlphaEvolve can evolve the solution itself, a search algorithm that finds the solution, or a function that constructs the solution; which of these is most effective depends on the problem.
AlphaEvolve's key components are:
LLM Ensemble:
AlphaEvolve uses cutting-edge LLMs such as Gemini 2.0 Flash and Gemini 2.0 Pro. Gemini Pro offers deep and insightful suggestions, while Gemini Flash's efficiency lets the system explore many more candidate ideas. This ensemble balances throughput against solution quality. The main job of the LLMs is to examine current solutions and propose improvements; AlphaEvolve is model-agnostic, but its performance improves with more capable LLMs. The LLMs emit either whole code blocks, when a program is short or heavily rewritten, or diff-style code edits for targeted updates.
Prompt Sampler:
This component pulls programs from the program database to build LLM prompts. Prompts can be enriched with equations, code samples, relevant literature, human-written instructions, stochastic formatting, and rendered evaluation results. Another option is meta-prompt evolution, in which the LLM itself proposes prompts.
Pool of Evaluators
This component runs proposed programs and scores them with the user-provided automatic evaluation metrics, which assess solution quality objectively. AlphaEvolve can evaluate candidates on a cascade of progressively harder scenarios to quickly discard less promising ones, and it can also take LLM-generated feedback on desirable qualities that the metrics cannot capture. Evaluation is parallelised to speed up the process, and multiple metrics can be optimised simultaneously. AlphaEvolve is limited to problems whose solutions can be graded automatically, but that automated assessment is also what guards against LLM hallucinations.
Program Database:
The program database stores generated solutions together with their evaluation results. An evolutionary algorithm inspired by island models and MAP-Elites manages the pool of solutions and selects parents for future generations, balancing exploration and exploitation.
Distributed Pipeline:
AlphaEvolve runs as an asynchronous computing pipeline, implemented in Python with asyncio. The pipeline, comprising a controller, LLM samplers, and evaluation nodes, is tuned for throughput so that as many ideas as possible can be generated and evaluated within a given compute budget.
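A toy sketch of that style of asynchronous loop is below. The mock sampler, the single-number "programs", and the scoring are invented stand-ins, since the real pipeline queries LLMs and executes full programs.

```python
import asyncio
import random

async def llm_sampler(prompt_queue, candidate_queue):
    # Stand-in for an LLM call that proposes a modified program given a parent.
    while True:
        parent = await prompt_queue.get()
        child = parent + random.gauss(0, 1)      # mock "mutation" of the program
        await candidate_queue.put(child)

async def evaluator(candidate_queue, database):
    # Stand-in for executing the user-supplied evaluation function.
    while True:
        candidate = await candidate_queue.get()
        score = -abs(candidate - 3.14)           # mock scalar metric to maximise
        database.append((score, candidate))

async def controller(prompt_queue, database, budget=200):
    # Repeatedly pick a promising parent from the database and request new samples.
    for _ in range(budget):
        parent = max(database)[1] if database else 0.0
        await prompt_queue.put(parent)
        await asyncio.sleep(0)                   # yield so samplers/evaluators run

async def main():
    prompt_queue, candidate_queue, database = asyncio.Queue(), asyncio.Queue(), []
    workers = [
        asyncio.create_task(llm_sampler(prompt_queue, candidate_queue)),
        asyncio.create_task(evaluator(candidate_queue, database)),
    ]
    await controller(prompt_queue, database)
    await asyncio.sleep(0.1)                     # let in-flight candidates finish
    for w in workers:
        w.cancel()
    await asyncio.gather(*workers, return_exceptions=True)
    print("best score, best 'program':", max(database))

asyncio.run(main())
```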
AlphaEvolve has excelled in several fields:
It improved hardware, data centres, and AI training across Google's computing ecosystem.
AlphaEvolve discovered a scheduling heuristic for Google's Borg cluster management system that recovers, on average, about 0.7% of Google's worldwide compute resources. Because this in-production solution is human-readable code, it also improves interpretability, debuggability, predictability, and ease of deployment.
It proposed rewriting a critical arithmetic circuit in Google's Tensor Processing Units (TPUs) in Verilog, removing unnecessary bits, with the change slated for a future TPU. This suggests AlphaEvolve can assist hardware design by proposing improvements directly in standard hardware description languages.
It sped up a fundamental kernel in Gemini's architecture by 23% and cut overall training time by 1% by finding better ways to partition large matrix multiplication operations, improving both AI performance and research velocity. It also reduced the engineering time spent on kernel optimisation considerably, and this marks the first time AlphaEvolve has been used to optimise Gemini's own training process.
AlphaEvolve optimised low-level GPU operations to speed up a Transformer FlashAttention kernel implementation by 32.5%. It can also optimise compiler Intermediate Representations (IRs), which points to the possibility of incorporating AlphaEvolve into the compiler workflow or folding such optimisations into existing compilers.
In mathematics and algorithm discovery, AlphaEvolve developed novel gradient-based optimisation procedures that led to new matrix multiplication algorithms. It improved on Strassen's 1969 approach by finding a way to multiply 4x4 complex-valued matrices using 48 scalar multiplications, and it matched or outperformed the best known solutions for many other matrix multiplication problems.
When applied to over 50 open mathematics problems, AlphaEvolve improved on the best-known solutions in about 20% of cases and rediscovered the state-of-the-art solutions in about 75%. It advanced the kissing number problem by finding a configuration that sets a new lower bound in 11 dimensions, and it improved bounds on packing problems, Erdős's minimum overlap problem, uncertainty principles, and autocorrelation inequalities. These results were often achieved by AlphaEvolve evolving problem-specific heuristic search strategies.
AlphaEvolve outperforms FunSearch thanks to its capacity to evolve entire codebases, its support for multiple metrics, and its use of frontier LLMs with rich context. It differs from classical evolutionary programming in that the evolution operators themselves are supplied by LLMs rather than hand-designed. By superoptimising code, it contributes to advances in AI, mathematics, and the sciences.
One limitation is that AlphaEvolve requires problems whose solutions can be evaluated automatically; tasks that need manual experimentation are outside its scope. LLM-based evaluation is possible but is not the main focus.
AlphaEvolve should improve as LLMs get better at coding. Google is exploring a wider access program and an Early Access Program for academics. AlphaEvolve's broad scope suggests game-changing applications in business, sustainability, medical development, and materials research. Future work includes distilling AlphaEvolve's improvements back into base LLMs and possibly integrating natural-language feedback.
0 notes
drmikewatts · 3 months ago
Text
IEEE Transactions on Evolutionary Computation, Volume 29, Issue 2, April 2025
1) Knowledge Structure Preserving-Based Evolutionary Many-Task Optimization
Author(s): Yi Jiang, Zhi-Hui Zhan, Kay Chen Tan, Sam Kwong, Jun Zhang
Pages: 287 - 301
2) Bayesian Optimization for Quality Diversity Search With Coupled Descriptor Functions
Author(s): Paul Kent, Adam Gaier, Jean-Baptiste Mouret, Juergen Branke
Pages: 302 - 316
3) Machine Learning-Assisted Multiobjective Evolutionary Algorithm for Routing and Packing
Author(s): Fei Liu, Qingfu Zhang, Qingling Zhu, Xialiang Tong, Mingxuan Yuan
Pages: 317 - 330
4) LLaMEA: A Large Language Model Evolutionary Algorithm for Automatically Generating Metaheuristics
Author(s): Niki van Stein, Thomas Bäck
Pages: 331 - 345
5) Genetic Programming Hyper Heuristic With Elitist Mutation for Integrated Order Batching and Picker Routing Problem
Author(s): Yuquan Wang, Naiming Xie, Nanlei Chen, Hui Ma, Gang Chen
Pages: 346 - 359
6) Genetic Multi-Armed Bandits: A Reinforcement Learning Inspired Approach for Simulation Optimization
Author(s): Deniz Preil, Michael Krapp
Pages: 360 - 374
7) A Novel Knowledge-Based Genetic Algorithm for Robot Path Planning in Complex Environments
Author(s): Junfei Li, Yanrong Hu, Simon X. Yang
Pages: 375 - 389
8) A Biased Random Key Genetic Algorithm for Solving the Longest Common Square Subsequence Problem
Author(s): Jaume Reixach, Christian Blum, Marko Djukanović, Günther R. Raidl
Pages: 390 - 403
9) Decoupling Constraint: Task Clone-Based Multitasking Optimization for Constrained Multiobjective Optimization
Author(s): Genghui Li, Zhenkun Wang, Weifeng Gao, Ling Wang
Pages: 404 - 417
10) Protein Design by Directed Evolution Guided by Large Language Models
Author(s): Thanh V. T. Tran, Truong Son Hy
Pages: 418 - 428
11) Multiform Genetic Programming Framework for Symbolic Regression Problems
Author(s): Jinghui Zhong, Junlan Dong, Wei-Li Liu, Liang Feng, Jun Zhang
Pages: 429 - 443
12) Causal Inference-Based Large-Scale Multiobjective Optimization
Author(s): Bingdong Li, Yanting Yang, Peng Yang, Guiying Li, Ke Tang, Aimin Zhou
Pages: 444 - 458
13) A Bidirectional Differential Evolution-Based Unknown Cyberattack Detection System
Author(s): Hanyuan Huang, Tao Li, Beibei Li, Wenhao Wang, Yanan Sun
Pages: 459 - 473
14) MOTEA-II: A Collaborative Multiobjective Transformation-Based Evolutionary Algorithm for Bilevel Optimization
Author(s): Lei Chen, Yiu-Ming Cheung, Hai-Lin Liu, Yutao Lai
Pages: 474 - 489
15) From Direct to Directional Variable Dependencies—Nonsymmetrical Dependencies Discovery in Real-World and Theoretical Problems
Author(s): Michal Witold Przewozniczek, Bartosz Frej, Marcin Michal Komarnicki
Pages: 490 - 504
16) A Two-Individual Evolutionary Algorithm for Cumulative Capacitated Vehicle Routing With Single and Multiple Depots
Author(s): Yuji Zou, Jin-Kao Hao, Qinghua Wu
Pages: 505 - 518
17) Gradient-Guided Local Search for Large-Scale Hypervolume Subset Selection
Author(s): Yang Nan, Tianye Shu, Hisao Ishibuchi, Ke Shang
Pages: 519 - 533
18) Evolutionary Computation in the Era of Large Language Model: Survey and Roadmap
Author(s): Xingyu Wu, Sheng-Hao Wu, Jibin Wu, Liang Feng, Kay Chen Tan
Pages: 534 - 554
19) Zeroth-Order Actor–Critic: An Evolutionary Framework for Sequential Decision Problems
Author(s): Yuheng Lei, Yao Lyu, Guojian Zhan, Tao Zhang, Jiangtao Li, Jianyu Chen, Shengbo Eben Li, Sifa Zheng
Pages: 555 - 569
0 notes
renatoferreiradasilva · 3 months ago
Text
Spectral Operators, the Riemann Hypothesis, and the Structure of Mathematical Truth
1. Introduction
The Riemann Hypothesis (RH) is one of the most profound unsolved problems in mathematics, conjecturing that all nontrivial zeros of the Riemann zeta function \( \zeta(s) \) lie on the critical line \( \Re(s) = \frac{1}{2} \). Among the various approaches to proving RH, the Hilbert-Pólya conjecture posits that these zeros correspond to the eigenvalues of a self-adjoint operator \( H \). This paper synthesizes two complementary studies: one that explores the conceptual and epistemological foundations of spectral methods, and another that rigorously demonstrates the self-adjointness of a proposed differential operator of 12th order.
The fusion of these perspectives provides both a theoretical motivation for spectral approaches and a concrete mathematical framework supporting the existence of an operator whose eigenvalues may align with the imaginary parts of the zeta function's nontrivial zeros. The key contributions of this paper are:
Analyzing the role of spectral models in understanding RH.
Discussing the connections between RH, random matrix theory, and quantum chaos.
Demonstrating the self-adjointness of a proposed operator, providing a step toward the explicit realization of the Hilbert-Pólya conjecture.
By bridging these two studies, we aim to highlight the strengths and limitations of the spectral approach to RH.
2. The Spectral Perspective on the Riemann Hypothesis
2.1. The Hilbert-Pólya Conjecture and Spectral Models
The idea that RH could be proven via spectral analysis dates back to the Hilbert-Pólya conjecture, which suggests the existence of a self-adjoint operator \( H \) whose eigenvalues correspond to the imaginary parts of the nontrivial zeros of \( \zeta(s) \). Such an operator would satisfy:
\[ H \psi_n = \lambda_n \psi_n, \quad \lambda_n = \Im(\rho_n), \]
where \( \rho_n \) are the nontrivial zeros of the zeta function. For this to be valid, \( H \) must be self-adjoint, ensuring its eigenvalues are real.
One of the most intriguing aspects of this conjecture is the connection to random matrix theory (RMT), where Montgomery's pair correlation conjecture suggests that the statistics of zeta zeros resemble those of the Gaussian Unitary Ensemble (GUE). This unexpected link between number theory and quantum mechanics has driven interest in the search for an operator that naturally reproduces these spectral properties.
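For reference, Montgomery's pair correlation conjecture can be stated as follows: for fixed \( 0 < \alpha < \beta \), the normalized differences between pairs of zero ordinates \( \gamma, \gamma' \) satisfy
\[ \frac{1}{N(T)} \, \# \left\{ (\gamma, \gamma') : 0 < \gamma, \gamma' \le T, \ \alpha \le \frac{(\gamma - \gamma') \log T}{2\pi} \le \beta \right\} \;\longrightarrow\; \int_{\alpha}^{\beta} \left( 1 - \left( \frac{\sin \pi u}{\pi u} \right)^{2} \right) du \quad (T \to \infty), \]
where \( N(T) \) counts the zeros with ordinate in \( (0, T] \); the integrand is precisely the pair correlation of GUE eigenvalues.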
2.2. The Epistemology of Proof and Falsifiability
While numerical evidence from computations by Odlyzko suggests that RH holds for billions of zeros, this does not constitute a proof. Mathematics requires deductive certainty, unlike empirical sciences where falsifiability plays a key role. Some have speculated that RH might be independent of Zermelo-Fraenkel set theory, emphasizing the need for a rigorous mathematical proof rather than heuristic arguments.
3. A Self-Adjoint Operator for the Zeta Function
3.1. Defining the Operator
In pursuit of an explicit realization of the Hilbert-Pólya conjecture, a 12th-order differential operator has been proposed:
\[ H = -\frac{d^{12}}{dx^{12}} + V(x). \]
This operator is defined on the Hilbert space \( L^2(\mathbb{R}) \) with domain:
\[ D(H) = H^{12}(\mathbb{R}) \cap L^2(\mathbb{R}), \]
where \( H^{12}(\mathbb{R}) \) denotes the Sobolev space of functions with 12 square-integrable derivatives. The potential function \( V(x) \) is assumed to be real, measurable, and of controlled growth, with \( |V(x)| \leq C(1+|x|)^k \) for some \( k < 12 \). These conditions ensure the mathematical well-posedness of \( H \).
3.2. Proof of Self-Adjointness
To establish self-adjointness, the following steps were taken:
Symmetry: It was shown that \( H \) is symmetric via integration by parts, under boundary conditions ensuring that derivatives up to the 11th order vanish at infinity.
Von Neumann's Criterion: By analyzing the deficiency spaces \( \mathcal{N}_{\pm} \) associated with \( H \), it was proven that these spaces are trivial, implying that \( H \) is essentially self-adjoint.
Kato-Rellich and Wüst's Theorems: These theorems were used to ensure that the potential \( V(x) \) does not destroy the essential self-adjointness of \( H \).
With these steps, the operator \( H \) was rigorously proven to be self-adjoint in \( L^2(\mathbb{R}) \), a necessary condition for it to serve as a spectral model for RH.
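In outline, the symmetry step reads: for smooth \( \varphi, \psi \) whose derivatives up to order 11 vanish at infinity, twelve integrations by parts (an even number, so no sign change) together with the realness of \( V \) give
\[ \langle H\varphi, \psi \rangle = \int_{\mathbb{R}} \overline{\left( -\varphi^{(12)} + V\varphi \right)} \, \psi \, dx = \int_{\mathbb{R}} \overline{\varphi} \, \left( -\psi^{(12)} + V\psi \right) dx = \langle \varphi, H\psi \rangle , \]
which is the symmetry property needed before the deficiency-space and perturbation arguments can be applied.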
4. Interdisciplinary Implications and Future Directions
4.1. Quantum Chaos and Physical Interpretations
The connection between RH and quantum chaos remains an open question. If the eigenvalues of \( H \) indeed align with the imaginary parts of the zeta zeros, it would suggest that the zeta function encodes the energy levels of an unknown quantum system. This aligns with Dyson's speculation that RH may be the spectral problem of a yet-undiscovered physical system.
4.2. Open Challenges
Several crucial challenges remain:
Does \( H \) produce eigenvalues corresponding to zeta zeros?
Can this framework be extended to general L-functions?
Is there an underlying geometric structure dictating the choice of the 12th-order operator?
These questions must be addressed before a spectral proof of RH can be considered complete.
5. Conclusion
This paper integrates theoretical insights with rigorous mathematical results to advance the spectral approach to RH. The first component emphasizes the conceptual foundations of spectral models, highlighting connections to RMT and the philosophy of mathematical proof. The second component provides a concrete step forward by demonstrating the existence of a self-adjoint operator that could potentially align with the zeta function's zeros.
While the full resolution of RH remains elusive, the establishment of self-adjointness in a high-order differential operator opens new avenues for exploration. Further work is required to determine whether this operator indeed encodes the spectral properties of the zeta function, bridging the gap between mathematical conjecture and rigorous proof.
References
Montgomery, H. L. (1973). The pair correlation of zeros of the zeta function.
Odlyzko, A. M. (1987). On the distribution of spacings between zeros of the zeta function.
Reed, M., Simon, B. (1972). Methods of Modern Mathematical Physics, Vol. II.
Connes, A. (1999). Noncommutative Geometry and the Zeros of the Riemann Zeta Function.
Dyson, F. J. (2009). Birds and Frogs: Mathematical Reflections.
Baez, J., Oliveira, R. (2003). Higher Order Operators in Spectral Theory.
0 notes
maacsatara · 6 months ago
Text
Tumblr media
AI in Game Design: Breathing Life into Virtual Worlds with Behavior and Pathfinding
Artificial intelligence (AI) has revolutionized the gaming landscape, transforming static and predictable virtual worlds into dynamic and engaging experiences.  No longer are we confined to battling repetitive enemies with pre-scripted movements. AI empowers game developers to create intelligent agents that can learn, adapt, and surprise players, leading to more immersive and challenging gameplay. This is why many aspiring game developers seek out a game design institute in Pune (or elsewhere) to learn the intricacies of incorporating AI into their creations. Two key areas where AI shines in game design are behavior and pathfinding.
Behavior:
At the heart of compelling game AI lies the ability to simulate believable behavior. This involves creating Non-Player Characters (NPCs) that act with a degree of autonomy and intelligence, reacting to the player and their environment in a realistic manner. AI techniques like finite state machines, behavior trees, and neural networks are used to achieve this.
Finite State Machines (FSMs): These are a classic approach to defining NPC behavior. An FSM consists of a set of states and transitions between them, triggered by events in the game world. For example, an enemy guard might have states like "patrol," "chase," and "attack," with transitions based on the player's proximity or actions. While simple to implement, FSMs can become complex and difficult to manage for intricate behaviors; a minimal code sketch of such a guard FSM appears after this list.
Behavior Trees: A more modular and flexible approach, behavior trees represent NPC behavior as a tree-like structure with nodes representing actions, conditions, and logical operators. This allows for more complex and nuanced behaviors, making it easier to create NPCs that react to a wider range of situations and stimuli.
Neural Networks: Inspired by the human brain, neural networks enable NPCs to learn and adapt their behavior over time. By training on vast amounts of data, they can develop sophisticated strategies and decision-making abilities, leading to more challenging and unpredictable opponents.
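Returning to the FSM example above, here is a minimal Python sketch of the patrol/chase/attack guard; the state names, distances, and thresholds are invented for illustration and not tied to any particular engine.

```python
# Minimal guard FSM sketch: illustrative states and thresholds only.
class GuardFSM:
    def __init__(self):
        self.state = "patrol"

    def update(self, distance_to_player, player_visible):
        if self.state == "patrol":
            if player_visible and distance_to_player < 20.0:
                self.state = "chase"
        elif self.state == "chase":
            if distance_to_player < 2.0:
                self.state = "attack"
            elif not player_visible:
                self.state = "patrol"
        elif self.state == "attack":
            if distance_to_player >= 2.0:
                self.state = "chase"
        return self.state

# Example tick: the guard spots the player 15 units away and starts chasing.
guard = GuardFSM()
print(guard.update(distance_to_player=15.0, player_visible=True))  # "chase"
```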
Pathfinding:
Navigating complex game environments is another crucial aspect of AI in games. Pathfinding algorithms allow NPCs to find the most efficient route to a target, avoiding obstacles and reacting dynamically to changes in the environment.
A* Search: One of the most popular pathfinding algorithms, A* uses a heuristic function to estimate the cost of reaching the goal from any given point, efficiently finding the shortest path while taking into account terrain, obstacles, and other factors.
Dijkstra's Algorithm: This algorithm guarantees the shortest path between two points by exploring all possible routes. While computationally more expensive than A*, it can be useful in situations where the exact shortest path is critical.
Navigation Meshes: These are pre-computed data structures that represent the navigable areas of a game environment. By simplifying the environment into a network of interconnected nodes, navigation meshes allow for faster and more efficient pathfinding, especially in complex 3D worlds.
Beyond Basic Movement:
AI-driven behavior and pathfinding go far beyond simply moving characters around a game world. They contribute to a more immersive and engaging player experience in several ways:
Dynamic Difficulty Adjustment: AI can analyze player behavior and adjust the difficulty of the game accordingly, providing a challenging but fair experience.
Emergent Gameplay: By giving NPCs the ability to react to the player and each other in unpredictable ways, AI can create emergent gameplay scenarios that are unique and unscripted.
Realistic Interactions: AI allows for more believable interactions between NPCs and their environment, creating a sense of immersion and realism.
Enhanced Storytelling: Intelligent NPCs can contribute to richer and more engaging narratives, reacting to the player's choices and driving the story forward.
The Future of AI in Game Design:
As AI technology continues to advance, we can expect even more sophisticated and immersive experiences in the future.  Reinforcement learning, where agents learn through trial and error, holds immense potential for creating truly intelligent NPCs that can adapt and evolve in response to player actions. Procedural content generation, powered by AI, can create vast and varied game worlds with unique challenges and experiences.
In conclusion, AI is transforming the way games are designed and played. By enabling believable behavior and intelligent pathfinding, AI brings virtual worlds to life, creating more engaging, challenging, and immersive experiences for players. Institutions like MAAC Academy, with its focus on game design and development, are at the forefront of equipping aspiring game developers with the skills and knowledge to harness the power of AI in their creations. As AI technology continues to evolve, we can expect even more innovative applications in the future, blurring the lines between reality and the virtual world.
0 notes
civilengineering-crimson · 1 year ago
Text
A Brief Survey of Regionalization Modeling; A Bonus from Gauged to Ungauged Basins_Crimson Publishers
Tumblr media
Runoff-rainfall (r-r) models have been widely used to manage water resources during the past decades. One of the most important upfront hydrological issues, certainly in r-r prediction, is adopting the best calibration method [1]. Given the importance of the calibration procedure in hydrological modeling, various methods and approaches have been used to optimize parameter values, ranging from manual, trial-and-error styles to entirely automated, heuristic, and sophisticated approaches. Automatic calibration approaches usually take advantage of modern search processes and algorithms to fit residual errors between observed and simulated data (using objective functions) in order to optimize parameter values [2]. In hydrological modeling, particularly with distributed models, changes in the spatial characteristics of watersheds and the resulting processes are considered explicitly [3]. These types of models are fundamentally designed to incorporate various sorts of flow information and watershed attributes to model streamflow accurately and in a timely manner (i.e. Big Data Machine Learning [4]).
Read more about this article: https://crimsonpublishers.com/acet/fulltext/ACET.000566.php
For more articles in our journal:https://crimsonpublishers.com/acet/
0 notes
latencyfx · 1 year ago
Text
Quantum Machine Learning for Protein Folding
Proteins are complex 3-dimensional structures that play a vital role in human health. In their unfolded state, proteins can have a wide variety of structural forms; however, they must refold into their native, functional structure in order to function properly. This process is both a biological mystery and a critical step in drug discovery, as many of our most effective medicines are protein-based.
Despite its crucial importance, folding a single protein is an enormous computational challenge. The inherent asymmetry and complexity of proteins creates a rugged energy landscape that must be optimized using highly accurate models, which requires significant computational resources. The advent of quantum computing has the potential to dramatically accelerate the folding of complex proteins, paving the way for new therapies and diagnostics.
Quantum machine learning for protein folding has already yielded impressive results, demonstrating the power of quantum computation to tackle a diverse set of biochemical problems. The superposition of qubits allows for simultaneous simulation of multiple solutions, enabling exponential speedups compared to classical computers. These speedups, coupled with the ability to probe more complex energy states of proteins, can reveal novel insights into folding pathways and help accelerate drug discovery timelines.
Tumblr media
Recent studies have used hybrid quantum-classical computer systems to study the problem of protein folding. Casares et al 2021 combined quantum walks on a quantum computer with deep learning on a classical computer to create a hybrid algorithm called QFold, which achieves a polynomial speedup over the best classical algorithms. Outeiral et al 2020 used a similar approach, combining quantum annealing with a genetic algorithm on a classical computer to find low-energy configurations of lattice protein models.
Other approaches have been explored using variational quantum learning, a form of approximate inference in which the optimization algorithm is informed by an empirical error model. For example, Roney and Ovchinnikov use an error model to guide their adiabatic quantum protein-folding algorithm to start from the lowest-energy conformation of a given amino acid sequence. Their algorithm then uses a heuristic search to locate a likely 3D protein structure.
While most experimental work to date has focused on simple proteins, recent theoretical developments have opened the door to applying quantum machine learning to more challenging proteins. In particular, researchers at Zhejiang University in Hangzhou have proposed a model that describes protein folding as a quantum walk on a definite graph, without relying on any simple assumptions of protein structure at the outset.
This work is an exciting advancement in the field of quantum biology, but it is important to emphasize that it is still far from a clinically relevant model of protein folding. To be applicable to the study of real proteins, the model must be tested in simulations on a large scale. Currently available quantum computational devices have between 14 and 15 qubits; to simulate a protein of 50 amino acids, the number of qubits would need to grow to 98 or more. Quantum computers with higher qubit counts are under intensive development, and their application to protein folding may soon become a reality.
1 note · View note
Text
The Psychology of Buying: Understanding Consumer Behavior
The way consumers make purchasing decisions has always been a subject of interest for marketers and businesses. The study of consumer behavior examines the psychological processes that influence a person's buying choices. By understanding these underlying factors, businesses can develop effective strategies to persuade consumers to buy their products or services. Let's explore some key aspects of consumer behavior and the psychology behind buying decisions.
1. Perception: Perception plays a crucial role in consumer behavior. It involves how potential buyers perceive a product or brand. Different individuals may have varying perceptions based on factors such as personal experiences, beliefs, and cultural influences. For instance, a person who values durability and quality might perceive a product as reliable, leading them to purchase it. Marketers can influence perception through branding, advertising, and packaging.
2. Motivation: People are driven by various motivations when making purchasing decisions. These motivations can be categorized into four main types: utilitarian (meeting practical needs), hedonic (seeking pleasure and enjoyment), ego-defensive (protecting self-image), and value-expressive (reflecting personal values and identity). Understanding the motivations behind buying behavior allows marketers to tailor their messaging and positioning to attract the target audience.
3. Social influence: Humans are social creatures, and social influence plays a significant role in consumer behavior. People tend to observe and imitate the actions of others, especially those they perceive as similar or influential. Social proof, such as testimonials, reviews, and endorsements, can significantly affect a person's decision to buy. Likewise, the desire for social acceptance and belonging can drive individuals to purchase products or engage in certain behaviors.
4. Cognitive biases: Cognitive biases are unconscious mental shortcuts that influence decision-making. These biases can lead consumers to make irrational choices or be swayed by persuasive marketing techniques. Examples include the anchoring effect (relying too heavily on initial information), the availability heuristic (overestimating the likelihood of events based on readily available information), and the scarcity effect (increased desire for scarce items). Marketers can leverage these biases to create urgency, exclusivity, and scarcity to influence buying decisions.
5. Emotional factors: Emotions play a significant role in consumer decision-making. Studies have shown that emotions have a more substantial impact on buying decisions than rational thinking. Positive emotions, such as joy, excitement, or happiness, can lead to impulsive purchases. On the other hand, negative emotions, like fear or anxiety, can trigger a need for security or self-improvement, leading consumers to buy products that alleviate those negative feelings.
6. Post-purchase behavior: Consumer behavior does not end with the purchase; it continues after the transaction. Post-purchase behavior includes evaluating the purchase decision and forming an opinion about the product or brand. Positive post-purchase experiences can lead to brand loyalty and repeat purchases.
On the contrary, negative experiences may result in dissatisfaction, negative word-of-mouth, or even returns and refunds. Understanding consumer behavior is essential for businesses to create effective marketing strategies, improve customer satisfaction, and increase sales. By considering factors such as perception, motivation, social influence, cognitive biases, emotional factors, and post-purchase behavior, marketers can tailor their messaging and positioning to appeal to consumers on a deeper psychological level.
As consumer behavior continually evolves, driven by advances in technology, societal changes, and cultural shifts, businesses must stay up to date and adapt their strategies accordingly. Developing a comprehensive understanding of the psychology of buying is a continuous process that can help businesses thrive in an ever-changing market landscape.
0 notes
myprogrammingsolver · 1 year ago
Text
Pacman Assignment:
Implement Breadth-first search in the class BFSAgent, Depth-first search in the class DFSAgent, and A* search in the class AStarAgent, all within the pacmanAgents.py file, using admissibleHeuristic as the heuristic function for the AStarAgent. Notes: ⁃ Python 2.7 is required to run the framework. ⁃ All your code must be inside the pacmanAgents.py file. RandomAgent and OneStepLookAheadAgent are…
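The framework's own API isn't reproduced here, so the following is only a generic sketch of the BFS core the assignment asks for; `get_successors`, `is_goal`, and the state representation are placeholders to be replaced by the framework's actual successor-generation and win-condition calls.

```python
from collections import deque

def breadth_first_search(start, is_goal, get_successors):
    """Generic BFS core: returns the list of actions leading to the first goal state."""
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, actions = frontier.popleft()
        if is_goal(state):
            return actions
        for next_state, action in get_successors(state):
            if next_state not in visited:
                visited.add(next_state)
                frontier.append((next_state, actions + [action]))
    return []  # no goal reachable; an agent would fall back to a default action
```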
Tumblr media
View On WordPress
0 notes
jcmarchi · 4 months ago
Text
Shielding Prompts from LLM Data Leaks
New Post has been published on https://thedigitalinsider.com/shielding-prompts-from-llm-data-leaks/
Opinion: An interesting IBM NeurIPS 2024 submission from late 2024 resurfaced on Arxiv last week. It proposes a system that can automatically intervene to protect users from submitting personal or sensitive information into a message when they are having a conversation with a Large Language Model (LLM) such as ChatGPT.
Mock-up examples used in a user study to determine the ways that people would prefer to interact with a prompt-intervention service. Source: https://arxiv.org/pdf/2502.18509
The mock-ups shown above were employed by the IBM researchers in a study to test potential user friction to this kind of ‘interference’.
Though scant details are given about the GUI implementation, we can assume that such functionality could either be incorporated into a browser plugin communicating with a local ‘firewall’ LLM framework; or that an application could be created that can hook directly into (for instance) the OpenAI API, effectively recreating OpenAI’s own downloadable standalone program for ChatGPT, but with extra safeguards.
That said, ChatGPT itself automatically self-censors responses to prompts that it perceives to contain critical information, such as banking details:
ChatGPT refuses to engage with prompts that contain perceived critical security information, such as bank details (the details in the prompt above are fictional and non-functional). Source: https://chatgpt.com/
However, ChatGPT is much more tolerant in regard to different types of personal information – even if disseminating such information in any way might not be in the user’s best interests (in this case perhaps for various reasons related to work and disclosure):
The example above is fictional, but ChatGPT does not hesitate to engage the user in a conversation on a sensitive subject that constitutes a potential reputational or earnings risk.
In the above case, it might have been better to write: ‘What is the significance of a leukemia diagnosis on a person’s ability to write and on their mobility?’
The IBM project identifies and reinterprets such requests from a ‘personal’ to a ‘generic’ stance.
Schema for the IBM system, which uses local LLMs or NLP-based heuristics to identify sensitive material in potential prompts.
This assumes that material gathered by online LLMs, in this nascent stage of the public’s enthusiastic adoption of AI chat, will never feed through either to subsequent models or to later advertising frameworks that might exploit user-based search queries to provide potential targeted advertising.
Though no such system or arrangement is known to exist now, neither was such functionality yet available at the dawn of internet adoption in the early 1990s; since then, cross-domain sharing of information to feed personalized advertising has led to diverse scandals, as well as paranoia.
Therefore history suggests that it would be better to sanitize LLM prompt inputs now, before such data accrues at volume, and before our LLM-based submissions end up in permanent cyclic databases and/or models, or other information-based structures and schemas.
Remember Me?
One factor weighing against the use of ‘generic’ or sanitized LLM prompts is that, frankly, the facility to customize an expensive API-only LLM such as ChatGPT is quite compelling, at least at the current state of the art – but this can involve the long-term exposure of private information.
I frequently ask ChatGPT to help me formulate Windows PowerShell scripts and BAT files to automate processes, as well as on other technical matters. To this end, I find it useful that the system permanently memorize details about the hardware that I have available; my existing technical skill competencies (or lack thereof); and various other environmental factors and custom rules:
ChatGPT allows a user to develop a ‘cache’ of memories that will be applied when the system considers responses to future prompts.
Inevitably, this keeps information about me stored on external servers, subject to terms and conditions that may evolve over time, without any guarantee that OpenAI (though it could be any other major LLM provider) will respect the terms they set out.
In general, however, the capacity to build a cache of memories in ChatGPT is most useful because of the limited attention window of LLMs in general; without long-term (personalized) embeddings, the user feels, frustratingly, that they are conversing with an entity suffering from anterograde amnesia.
It is difficult to say whether newer models will eventually become adequately performant to provide useful responses without the need to cache memories, or to create custom GPTs that are stored online.
Temporary Amnesia
Though one can make ChatGPT conversations ‘temporary’, it is useful to have the Chat history as a reference that can be distilled, when time allows, into a more coherent local record, perhaps on a note-taking platform; but in any case we cannot know exactly what happens to these ‘discarded’ chats (though OpenAI states they will not be used for training, it does not state that they are destroyed), based on the ChatGPT infrastructure. All we know is that chats no longer appear in our history when ‘Temporary chats’ is turned on in ChatGPT.
Various recent controversies indicate that API-based providers such as OpenAI should not necessarily be left in charge of protecting the user’s privacy, including the discovery of emergent memorization, signifying that larger LLMs are more likely to memorize some training examples in full, and increasing the risk of disclosure of user-specific data –  among other public incidents that have persuaded a multitude of big-name companies, such as Samsung, to ban LLMs for internal company use.
Think Different
This tension between the extreme utility and the manifest potential risk of LLMs will need some inventive solutions – and the IBM proposal seems to be an interesting basic template in this line.
Three IBM-based reformulations that balance utility against data privacy. In the lowest (pink) band, we see a prompt that is beyond the system’s ability to sanitize in a meaningful way.
The IBM approach intercepts outgoing packets to an LLM at the network level, and rewrites them as necessary before the original can be submitted. The rather more elaborate GUI integrations seen at the start of the article are only illustrative of where such an approach could go, if developed.
Of course, without sufficient agency the user may not understand that they are getting a response to a slightly-altered reformulation of their original submission. This lack of transparency is equivalent to an operating system’s firewall blocking access to a website or service without informing the user, who may then erroneously seek out other causes for the problem.
Prompts as Security Liabilities
The prospect of ‘prompt intervention’ analogizes well to Windows OS security, which has evolved from a patchwork of (optionally installed) commercial products in the 1990s to a non-optional and rigidly-enforced suite of network defense tools that come as standard with a Windows installation, and which require some effort to turn off or de-intensify.
If prompt sanitization evolves as network firewalls did over the past 30 years, the IBM paper’s proposal could serve as a blueprint for the future: deploying a fully local LLM on the user’s machine to filter outgoing prompts directed at known LLM APIs. This system would naturally need to integrate GUI frameworks and notifications, giving users control – unless administrative policies override it, as often occurs in business environments.
The researchers conducted an analysis of an open-source version of the ShareGPT dataset to understand how often contextual privacy is violated in real-world scenarios.
Llama-3.1-405B-Instruct was employed as a ‘judge’ model to detect violations of contextual integrity. From a large set of conversations, a subset of single-turn conversations were analyzed based on length. The judge model then assessed the context, sensitive information, and necessity for task completion, leading to the identification of conversations containing potential contextual integrity violations.
A smaller subset of these conversations, which demonstrated definitive contextual privacy violations, were analyzed further.
The framework itself was implemented using models that are smaller than typical chat agents such as ChatGPT, to enable local deployment via Ollama.
Schema for the prompt intervention system.
The three LLMs evaluated were Mixtral-8x7B-Instruct-v0.1; Llama-3.1-8B-Instruct; and DeepSeek-R1-Distill-Llama-8B.
User prompts are processed by the framework in three stages: context identification; sensitive information classification; and reformulation.
Two approaches were implemented for sensitive information classification: dynamic and structured. Dynamic classification determines which details are essential based on their use within a specific conversation, while structured classification allows a pre-defined list of sensitive attributes, always considered non-essential, to be specified. The model reformulates the prompt if it detects non-essential sensitive details, either removing or rewording them to minimize privacy risks while maintaining usability.
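As a rough illustration of the reformulation stage, the sketch below sends the user prompt to a locally served model via Ollama's REST API and asks it to return a privacy-preserving rewrite. The model name, the instruction text, and the single-call design are assumptions for illustration, not the paper's actual prompts or pipeline.

```python
import requests

OLLAMA_URL = "http://localhost:11434/api/generate"  # default local Ollama endpoint
MODEL = "llama3.1:8b"  # assumed local model; any instruct-tuned model could stand in

REWRITE_INSTRUCTIONS = (
    "Identify the task context of the user prompt below, list any sensitive "
    "personal details that are not essential to that task, and return the prompt "
    "rewritten with those details removed or generalized. Return only the rewrite.\n\n"
    "User prompt: {prompt}"
)

def sanitize(prompt):
    # Ask the local model for a reformulated, privacy-preserving version of the prompt.
    resp = requests.post(
        OLLAMA_URL,
        json={"model": MODEL,
              "prompt": REWRITE_INSTRUCTIONS.format(prompt=prompt),
              "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"].strip()

# The sanitized text, not the original, is what would be forwarded to the remote LLM.
print(sanitize("I'm Jane Doe, recently diagnosed with leukemia at Acme Corp. "
               "Can I still meet my Q3 deadlines?"))
```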
Home Rules
Though structured classification as a concept is not well-illustrated in the IBM paper, it is most akin to the ‘Private Data Definitions’ method in the Private Prompts initiative, which provides a downloadable standalone program that can rewrite prompts – albeit without the ability to directly intervene at the network level, as the IBM approach does (instead the user must copy and paste the modified prompts).
The Private Prompts executable allows a list of alternate substitutions for user-input text.
In the above image, we can see that the Private Prompts user is able to program automated substitutions for instances of sensitive information. In both cases, for Private Prompts and the IBM method, it seems unlikely that a user with enough presence-of-mind and personal insight to curate such a list would actually need this product  – though it could be built up over time as incidents accrue.
In an administrator role, structured classification could work as an imposed firewall or censor-net for employees; and in a home network it could, with some difficult adjustments, become a domestic network filter for all network users; but ultimately, this method is arguably redundant, since a user who could set this up properly could also self-censor effectively in the first place.
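For contrast, the pre-defined-list end of the spectrum amounts to little more than a table of regular-expression substitutions, something like the sketch below; the patterns shown are illustrative and will miss anything they don't literally match, which is exactly the weakness of a purely structured approach.

```python
import re

# Pre-defined "always non-essential" substitutions, in the spirit of
# structured classification; real lists would be curated per user or per policy.
SUBSTITUTIONS = [
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),
    (re.compile(r"\b(?:\d[ -]?){13,16}\b"), "[CARD_NUMBER]"),
    (re.compile(r"\bJane Doe\b"), "[NAME]"),            # literal, per-user entry
]

def scrub(prompt):
    # Apply every substitution in order; anything unmatched passes through untouched.
    for pattern, placeholder in SUBSTITUTIONS:
        prompt = pattern.sub(placeholder, prompt)
    return prompt

print(scrub("Hi, I'm Jane Doe (jane.doe@example.com), card 4111 1111 1111 1111."))
# -> "Hi, I'm [NAME] ([EMAIL]), card [CARD_NUMBER]."
```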
ChatGPT’s Opinion
Since ChatGPT recently launched its deep research tool for paid users, I used this facility to ask ChatGPT to review related literature and give me a ‘cynical’ take on IBM’s paper. I received the most defensive and derisive response the system has ever given when asked to evaluate or parse a new publication:
ChatGPT-4o has a low opinion of the IBM project.
‘If users don’t trust OpenAI, Google, or Anthropic to handle their data responsibly,’ ChatGPT posits. ‘why would they trust a third-party software layer sitting between them and the AI? The intermediary itself becomes a new point of failure—potentially logging, mishandling, or even leaking data before it ever reaches the LLM. It solves nothing if it just creates another entity to exploit user data.’
This objection seems self-serving and non-applicable, given the storied history of open source projects that genuinely benefit end-users through the concerted long-term efforts of community developers and contributors; and given the growing potential of mobile devices to run – and even train – lightweight LLMs. Certainly in this instance, the use case is not terribly demanding.
Next, ChatGPT confidently misses the point of having a lightweight LLM provide oversight of input towards a commercial LLM that could not possibly be run on a local machine (because the LLM is too big, and allowing local access is too risky for the company that makes it):
‘Effective privacy filtering requires understanding the input before deciding what to strip out. If an intermediary is smart enough to recognize sensitive data (names, addresses, financial details), then it’s already processing and analyzing that data—meaning it has to store or examine the very thing it’s supposed to erase.
‘This introduces a paradox: a) A dumb filter (basic regex substitution) will be ineffective and miss subtle privacy risks. b) A smart filter (LLM-powered rewriter) has to process the full input, making it just as dangerous as the original LLM.
‘If you trust the intermediary enough to read and rewrite your inputs, why not just trust the LLM itself?’
The answer to the final question here is that the local LLM is intended to be entirely on the side of the user, and inevitably open source, with minimal or zero need for network access. An equivalent commercial version, however well-intentioned at the outset, would eventually be vulnerable to corporate shifts and changes to the terms of service, whereas a suitable open source license would prevent this kind of ‘inevitable corruption’.
ChatGPT further argued that the IBM proposal ‘breaks user intent’, since it could reinterpret a prompt into an alternative that affects its utility. However, this is a much broader problem in prompt sanitization, and not specific to this particular use case.
In closing (ignoring its suggestion to use local LLMs ‘instead’, which is exactly what the IBM paper actually proposes), ChatGPT opined that the IBM method represents a barrier to adoption due to the ‘user friction’ of implementing warning and editing methods into a chat.
Here, ChatGPT may be right; but if significant pressure comes to bear because of further public incidents, or if profits in one geographical zone are threatened by growing regulation (and the company refuses to just abandon the affected region entirely), the history of consumer tech suggests that safeguards will eventually no longer be optional anyway.
Conclusion
We can’t realistically expect OpenAI ever to implement safeguards of the kind proposed in the IBM paper, or of the central concept behind it; at least, not effectively.
And certainly not globally: just as Apple blocks certain iPhone features in Europe, and LinkedIn applies different rules for exploiting its users’ data in different countries, it’s reasonable to expect that any AI company will default to the most profitable terms and conditions tolerable to each nation in which it operates – in each case at the expense of the user’s right to data privacy, as necessary.
First published Thursday, February 27, 2025
Updated Thursday, February 27, 2025 15:47:11 because of incorrect Apple-related link – MA
jhavelikes · 2 years ago
Quote
Large Language Models (LLMs) have demonstrated tremendous capabilities in solving complex tasks, from quantitative reasoning to understanding natural language. However, LLMs sometimes suffer from confabulations (or hallucinations) which can result in them making plausible but incorrect statements [1,2]. This hinders the use of current large models in scientific discovery. Here we introduce FunSearch (short for searching in the function space), an evolutionary procedure based on pairing a pre-trained LLM with a systematic evaluator. We demonstrate the effectiveness of this approach to surpass the best known results in important problems, pushing the boundary of existing LLM-based approaches [3]. Applying FunSearch to a central problem in extremal combinatorics — the cap set problem — we discover new constructions of large cap sets going beyond the best known ones, both in finite dimensional and asymptotic cases. This represents the first discoveries made for established open problems using LLMs. We showcase the generality of FunSearch by applying it to an algorithmic problem, online bin packing, finding new heuristics that improve upon widely used baselines. In contrast to most computer search approaches, FunSearch searches for programs that describe how to solve a problem, rather than what the solution is. Beyond being an effective and scalable strategy, discovered programs tend to be more interpretable than raw solutions, enabling feedback loops between domain experts and FunSearch, and the deployment of such programs in real-world applications.
Mathematical discoveries from program search with large language models | Nature
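For context, the core loop the abstract describes – generate candidate programs with an LLM, score them with a systematic evaluator, and keep the best candidates to seed the next round – can be sketched in a few lines. The toy Python outline below is only an illustration of that pattern, not DeepMind’s FunSearch code; llm_propose_candidate stands in for a real LLM call, and the “programs” are reduced to a single tunable constant so the sketch runs end to end.
import random
def llm_propose_candidate(pool):
    # Placeholder for an LLM call that mutates the best candidate found so far.
    best = max(pool, key=lambda c: c["score"])
    return {"constant": best["constant"] + random.uniform(-1.0, 1.0)}
def evaluate(candidate):
    # Systematic evaluator; toy objective is to get the constant close to 42.
    return -abs(candidate["constant"] - 42.0)
def evolve(generations=200, population=10):
    pool = [{"constant": random.uniform(0.0, 100.0)} for _ in range(population)]
    for c in pool:
        c["score"] = evaluate(c)
    for _ in range(generations):
        child = llm_propose_candidate(pool)
        child["score"] = evaluate(child)
        pool = sorted(pool + [child], key=lambda c: c["score"], reverse=True)[:population]
    return pool[0]
print(evolve())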
skillslash · 2 years ago
Text
A* Algorithm - Explained
A* Algorithm: the GPS of Computer Science 
The A* algorithm is one of the most popular pathfinding algorithms in computer science and AI. It is an informed search algorithm that uses heuristic estimates to find the shortest path between two points on a graph.
In this article, we will explore the basics of A* and give you a detailed Python implementation.
1. Introduction 
Introduced in 1968, the A* algorithm is a fundamental tool for solving pathfinding problems. It is employed in a wide range of applications, including video games, robotics, GPS navigation, and network routing. The name "A*" is read as "A star."
2. Basic Concepts 
2.1 Graphs and Nodes
At the core of the A* algorithm is a graph. A graph is a data structure that consists of nodes and edges. In the context of pathfinding, nodes represent points in a map or a network, and edges represent the connections or paths between those points.
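In Python, for example, such a weighted graph is commonly represented as a dictionary of dictionaries, where each key is a node and each inner mapping lists its neighbors and the edge costs to them:
graph = {
    "A": {"B": 1, "C": 4},
    "B": {"A": 1, "C": 2, "D": 5},
    "C": {"A": 4, "B": 2, "D": 1},
    "D": {"B": 5, "C": 1},
}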
2.2 Heuristics
Heuristics are rules of thumb or estimates used to guide the search for the optimal path. In A*, a heuristic function, denoted h(n), provides an estimate of the cost from a given node to the goal node. The quality of the heuristic is crucial, as it greatly influences the algorithm's performance. A perfect heuristic would give the exact cost, but this is rarely available; what matters is that the heuristic never overestimates the true cost (an admissible heuristic), which is enough to guarantee that A* returns an optimal path.
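On a grid map, for example, a common admissible heuristic is the Manhattan distance, which never overestimates the cost when movement is limited to the four cardinal directions:
def manhattan(node, goal):
    # node and goal are (x, y) grid coordinates
    return abs(node[0] - goal[0]) + abs(node[1] - goal[1])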
3. A* Algorithm
3.1 Algorithm Description
A* is a best-first search algorithm that considers both the cost to reach a node from the start node (g(n)) and the heuristic estimate of the cost from that node to the goal (h(n)). The algorithm maintains two sets, the open set and the closed set, to explore the graph efficiently.
3.2 Open and Closed Sets
The open set contains nodes to be evaluated. Initially, it contains only the start node.
The closed set contains nodes that have already been evaluated.
The algorithm repeatedly selects the node in the open set with the lowest f-score (f(n) = g(n) + h(n)), evaluates it, and adds it to the closed set. It then expands the node, considering its neighbors and calculating their f-scores.
3.3 Calculating the f-score
The f-score for a node n is calculated as follows: 
f(n) = g(n) + h(n)
where:
g(n) is the cost of the path from the start node to node n.
h(n) is the heuristic estimate of the cost from node n to the goal.
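For example, if the cheapest known path from the start to a node n costs g(n) = 5, and the heuristic estimates h(n) = 3 for the remaining distance to the goal, then f(n) = 5 + 3 = 8, and n will be expanded before any node with a higher f-score.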
3.4 Pseudocode 
Here is high-level pseudocode for the A* algorithm:
function A_Star(start, goal):
    open_set = {start}
    closed_set = {}
    g_score = {}  # Cost from start to node
    f_score = {}  # Estimated total cost from start to goal through node
    g_score[start] = 0
    f_score[start] = h(start)
    while open_set is not empty:
        current = node in open_set with the lowest f_score
        if current == goal:
            return reconstruct_path()
        open_set.remove(current)
        closed_set.add(current)
        for neighbor in neighbors(current):
            if neighbor in closed_set:
                continue
            tentative_g_score = g_score[current] + dist(current, neighbor)
            if neighbor not in open_set or tentative_g_score < g_score[neighbor]:
                g_score[neighbor] = tentative_g_score
                f_score[neighbor] = g_score[neighbor] + h(neighbor)
                if neighbor not in open_set:
                    open_set.add(neighbor)
                    neighbor.parent = current
    return failure
4. Implementation in Python 
Now, let’s implement the A* algorithm in Python. The graph is a dictionary mapping each node to its neighbors and edge costs, and the heuristic is passed in as a function so the same code works for any domain:
import heapq

def a_star(graph, start, goal, heuristic):
    # graph: dict mapping each node to {neighbor: edge_cost};
    # every node is assumed to appear as a key in graph.
    # heuristic: function (node, goal) -> estimated remaining cost.
    open_set = []
    heapq.heappush(open_set, (heuristic(start, goal), start))
    came_from = {}
    g_score = {node: float('inf') for node in graph}
    g_score[start] = 0
    while open_set:
        _, current = heapq.heappop(open_set)
        if current == goal:
            return reconstruct_path(came_from, current)
        for neighbor in graph[current]:
            tentative_g = g_score[current] + graph[current][neighbor]
            if tentative_g < g_score[neighbor]:
                # Found a cheaper route to this neighbor
                came_from[neighbor] = current
                g_score[neighbor] = tentative_g
                f_score = tentative_g + heuristic(neighbor, goal)
                heapq.heappush(open_set, (f_score, neighbor))
    return None  # no path exists between start and goal

def reconstruct_path(came_from, current):
    # Walk the came_from links back from the goal to the start
    path = [current]
    while current in came_from:
        current = came_from[current]
        path.append(current)
    path.reverse()
    return path
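As a quick sanity check, the function can be exercised on a small weighted graph with a zero heuristic (which makes A* behave like Dijkstra’s algorithm); a problem-specific heuristic such as the Manhattan distance above can be passed in instead:
graph = {
    "A": {"B": 1, "C": 4},
    "B": {"A": 1, "C": 2, "D": 5},
    "C": {"A": 4, "B": 2, "D": 1},
    "D": {"B": 5, "C": 1},
}
print(a_star(graph, "A", "D", heuristic=lambda n, g: 0))
# ['A', 'B', 'C', 'D']  (total cost 1 + 2 + 1 = 4)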
5. Applications and Variations 
The A* algorithm has numerous applications, including:
Pathfinding in video games
GPS navigation
Network routing
Robotics for motion planning
Maze solving
Natural language processing
Several variations of A* have been developed to address specific requirements and challenges, such as Dijkstra's algorithm (equivalent to A* with a zero heuristic), Jump Point Search (optimized for uniform-cost grid maps), and Weighted A* (which inflates the heuristic to trade optimality for speed).
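As an illustration, Weighted A* changes nothing but the priority calculation; with a weight w greater than 1, the search becomes greedier and typically faster, at the cost of the optimality guarantee:
def weighted_f(g_cost, h_cost, w=1.5):
    # w = 1 gives ordinary A*; larger w biases the search toward the goal
    return g_cost + w * h_cost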
6. Conclusion 
The A* algorithm provides an efficient solution to pathfinding problems by finding the shortest path through a graph. It combines the actual cost of reaching a node from the start with a heuristic estimate of the remaining cost from that node to the goal. 
Understanding the fundamental concepts and knowing how to implement A* is useful in a wide variety of contexts, making it a fundamental algorithm for anyone interested in computer science and artificial intelligence.
spilledreality · 2 years ago
Text
The Stormcrow Fallacy
Hey Adam,
Sent a reply via Substack's automated email system. Re-trying through your proper email address (and doxxing my real name via academic email!) just in case the last attempt didn't get through. If you're just too busy to respond, no hard feelings, and I promise not to keep following up.
I've been discussing with friends lately what we've called the Stormcrow fallacy. In Tolkien—an author who, as a young literature student, I thumbed my nose at, but have lately come around to considering one of the great literary modernists & WWI novelists—Gandalf, because he consistently shows up in advance of, and brings news of, misfortune, is often mis-identified as bringing that misfortune with him. (This identification leads to his nickname of Stormcrow.) I'm loath to call things fallacies, as behavioral economics persistently misuses that term when labeling (what in my view are simply) heuristics. But this does seem to be a basic logical error of confusing correlation with causation—one which leads to e.g. a lot of suppression in bureaucracies of coming trouble. (See also Feynman on the Challenger disaster.) There are a ton of similar failure modes, right, that boil down to mistaken causal attribution? "Cargocult" being a favorite example of geeks worldwide.
Anyway, I particularly liked your informal description of consciousness as a Stormcrow, as the intern who only shows up when something breaks. Everything I've been learning about cognitive science & phenomenology the past few years seems to point in this direction. (Heidegger's ready-to-hand vs present-at-hand concept is one of the more concise illustrations, but see also William James on habit, or ideas in predictive processing about what sorts of errors might propagate to the highest level of the stack.) And, like Stormcrow, like Gandalf, what this consciousness appears to do is manage the hard problem of triage, of prioritization. Which is also, not coincidentally I think, a problem associated with wisdom. Wisdom and consciousness both, as I understand them, are means of adjudicating various priorities which demand resources (including but not limited to attention), across a body, a family, or a geopolitical unit. Hence the historical premium on wise father, wise king, etc as an adjudicator of a social ecology who maintains and restores equilibria. (Hence the association of wisdom with "perspective"—a temporally or spatially or interpersonally zoomed-out view.) This wisdom often operates through a "feel for the game" which cannot be verbally justified. (Not surprising given this is how humans solve many of their complex, high-uncertainty problems.) But its implicit function mimics the search for Pareto optimality, which is why it is so frequently characterized by ideas like moderation, balance, and tao-like navigations of Scylla and Charybdis for a sweet spot ("Goldilocks zone").
Yours with much appreciation and interest,
S.R.