#hyperdimensional vector
Text
How much binary can we stuff in this bad boy? Let's find out!
Text
Hyperdimensional Computing (HDC)

Hyperdimensional computing, or HDC, is a radical approach to computation that flips the script on conventional machine learning and artificial intelligence. Instead of relying on networks of neurons, layers of weights, and backpropagation, HDC operates entirely in the realm of high-dimensional vector spaces. The underlying mathematical and statistical properties of these high-dimensional spaces turn out to be so rich and well-behaved that they open up entirely new ways of representing and processing information.
Let’s start with the basics. In hyperdimensional computing, information is represented as high-dimensional vectors, often with thousands or even tens of thousands of components. These vectors can be binary, ternary, or real-valued, but the key is their sheer dimensionality. Once you go beyond a certain number of dimensions—say, 10,000 or so—something interesting happens. Randomly generated vectors in such high-dimensional spaces are almost always nearly orthogonal to each other and thus their cosine similarity is very close to zero. Amazingly, this creates a powerful mathematical backdrop for encoding and manipulating information.
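As a quick sanity check of that claim, here is a minimal Python/NumPy sketch (illustrative only, not part of any particular HDC library) that draws two random 10,000-dimensional bipolar vectors and measures their cosine similarity:

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 10_000

def cosine(a, b):
    return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Two independently generated random bipolar (+1/-1) hypervectors
x = rng.choice([-1, 1], size=dim)
y = rng.choice([-1, 1], size=dim)

print(cosine(x, y))   # typically within about ±0.02 of zero: nearly orthogonal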
Conventional machine learning relies on neurons and layers of weighted connections to build up complex representations through learning. The process involves adjusting these weights over many iterations using gradient-based optimization—basically nudging the system bit by bit until it eventually approximates the desired mapping from input to output. This makes learning slow, incremental, and computationally expensive. HDC bypasses this entire process. Instead of learning through weight updates, HDC encodes information directly into high-dimensional vectors using algebraic operations like binding, superposition, and permutation. The idea is that the statistical properties of high-dimensional spaces do most of the heavy lifting for you.
Binding is one of the core operations in HDC. Suppose you want to store a relationship between two pieces of information—like "Paris is the capital of France." In a neural network, you’d need to train the system over many examples until it could generalize this relationship. In HDC, you can encode it instantly by binding three high-dimensional vectors: one for "Paris," one for "capital," and one for "France." Binding is usually implemented using element-wise XOR or multiplication. The result is a new vector that statistically reflects the combination of the original components. If you later ask the system, "What’s the capital of France?" you simply reverse the binding operation using XOR, and the system will reconstruct the original vector for "Paris" with high fidelity. No training required—just a single algebraic operation.
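Here is a toy NumPy sketch of XOR binding and unbinding along the lines described. The vectors and the three-way binding are purely illustrative; note that recovery is exact here only because a single fact is stored on its own.

```python
import numpy as np

rng = np.random.default_rng(1)
dim = 10_000

# Random binary hypervectors for the atomic symbols
paris   = rng.integers(0, 2, dim, dtype=np.uint8)
capital = rng.integers(0, 2, dim, dtype=np.uint8)
france  = rng.integers(0, 2, dim, dtype=np.uint8)

# Bind the three vectors with element-wise XOR to encode the fact
fact = paris ^ capital ^ france

# Query "capital of France?": unbind with XOR (its own inverse) to recover "Paris".
# Recovery is exact here; it becomes approximate once many facts are superposed.
recovered = fact ^ capital ^ france
print(np.array_equal(recovered, paris))   # True
```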
Superposition is the second core operation. This allows you to combine multiple vectors into a single one by simply adding them together and thresholding the result. The beauty of high-dimensional spaces is that even when you overlap vectors, the statistical independence between them ensures that the original components remain distinguishable. This is like taking three different melodies, playing them at once, and still being able to identify each one separately. The redundancy and near-orthogonality of high-dimensional vectors make this possible.
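A small illustrative sketch of superposition by element-wise majority vote, again in NumPy; the roughly 0.75 similarity is what you would expect when bundling three random binary vectors, versus roughly 0.5 for an unrelated vector.

```python
import numpy as np

rng = np.random.default_rng(2)
dim = 10_000
a, b, c = (rng.integers(0, 2, dim, dtype=np.int8) for _ in range(3))

# Superpose by majority vote: a component is 1 if at least two of the three vectors have a 1 there
bundle = ((a.astype(int) + b + c) >= 2).astype(np.int8)

def hamming_sim(x, y):
    # Fraction of positions that agree: ~0.5 for unrelated vectors, 1.0 for identical ones
    return 1.0 - float(np.mean(x != y))

print(hamming_sim(bundle, a))   # ~0.75: each ingredient stays clearly recognisable
print(hamming_sim(bundle, rng.integers(0, 2, dim, dtype=np.int8)))  # ~0.5: unrelated
```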
Permutation, the third key operation, introduces order and structure. Suppose you want to encode the sentence "The cat sat." You can permute (for example, cyclically shift) each word vector according to its position, or equivalently bind it with a positional vector, to preserve the order, then combine the results using superposition. Later, when you want to reconstruct the sequence, you undo the positional permutation to recover the original order. This means that HDC isn’t just good for storing unordered facts—it’s capable of encoding complex sequences, hierarchies, and structured knowledge.
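A hedged sketch of sequence encoding with cyclic-shift permutation plus superposition; using `np.roll` as the permutation is one common convention, not the only one.

```python
import numpy as np

rng = np.random.default_rng(3)
dim = 10_000
the, cat, sat = (rng.integers(0, 2, dim, dtype=np.int8) for _ in range(3))

def bundle(vectors):
    # Majority vote over an odd number of binary vectors
    return (np.sum(vectors, axis=0) > len(vectors) // 2).astype(np.int8)

def hamming_sim(x, y):
    return 1.0 - float(np.mean(x != y))

# Permute (cyclically shift) each word by its position, then superpose
sentence = bundle([np.roll(w, i) for i, w in enumerate([the, cat, sat])])

# Probe position 1 by undoing that shift and comparing against the vocabulary
probe = np.roll(sentence, -1)
print(hamming_sim(probe, cat))   # well above 0.5: "cat" occupied position 1
print(hamming_sim(probe, sat))   # ~0.5: "sat" did not
```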
Now, the real magic happens when you combine binding, superposition, and permutation with the statistical properties of high-dimensional spaces. Because high-dimensional vectors are nearly orthogonal, even if noise corrupts a small fraction of the components, the overall vector structure remains intact. If you have a 10,000-dimensional binary vector and flip 100 bits, that’s only a 1% corruption rate. The system still retains enough redundancy to identify the original pattern because the statistical gap between similarity and randomness is large enough to preserve the signal. This is what makes HDC so robust to noise, missing data, and partial corruption. In conventional machine learning, adding noise to the input usually degrades performance. In HDC, the system shrugs it off.
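The 1% corruption example can be reproduced directly. This sketch flips 100 of 10,000 bits and compares the corrupted copy against the original and against an unrelated random vector:

```python
import numpy as np

rng = np.random.default_rng(4)
dim = 10_000
pattern = rng.integers(0, 2, dim, dtype=np.int8)

# Corrupt 1% of the components: flip 100 of the 10,000 bits
noisy = pattern.copy()
flip = rng.choice(dim, size=100, replace=False)
noisy[flip] ^= 1

print(1.0 - float(np.mean(noisy != pattern)))   # 0.99: still unmistakably the same pattern
print(1.0 - float(np.mean(rng.integers(0, 2, dim, dtype=np.int8) != pattern)))  # ~0.50 for random
```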
Another fascinating consequence of high dimensionality is that similarity becomes a statistical property rather than an exact match. In low dimensions, two vectors either match or they don’t. In high dimensions, vectors that share structural similarity have a high cosine similarity even if some individual components differ. This means HDC naturally supports generalization. You don’t need to see all possible variations of a pattern to classify it correctly—if the vector aligns statistically with a known template, that’s enough.
This statistical similarity is what allows HDC to handle pattern recognition, associative memory, and reasoning tasks in a single framework. In conventional machine learning, you need different models for classification, clustering, and memory. In HDC, the same algebraic operations handle all of these tasks. If you want to classify a new pattern, you compute the cosine similarity between the input vector and stored patterns. If you want to retrieve a memory, you use XOR-based unbinding. If you want to combine patterns, you use superposition.
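A minimal nearest-template classifier built on cosine similarity, with made-up class labels; in practice the stored templates would themselves be bundles of encoded training examples rather than raw random vectors.

```python
import numpy as np

rng = np.random.default_rng(5)
dim = 10_000

def cosine(a, b):
    return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Hypothetical stored class templates
templates = {label: rng.choice([-1, 1], size=dim) for label in ("cat", "dog", "bird")}

# A query vector: a copy of the "dog" template with 10% of its components corrupted
query = templates["dog"].copy()
flipped = rng.choice(dim, size=dim // 10, replace=False)
query[flipped] *= -1

prediction = max(templates, key=lambda label: cosine(query, templates[label]))
print(prediction)   # "dog": the statistically closest template wins
```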
HDC also excels at one-shot learning. Neural networks require large datasets and long training times because they need to iteratively adjust weights to converge on a solution. In HDC, you can store a new concept instantly by encoding it as a high-dimensional vector. If you learn that "Athens is the capital of Greece," you just bind "Athens" with "capital" and "Greece" and store the result. Done. No retraining, no weight adjustment—just add a new vector to memory.
Clustering works differently in HDC too. In low dimensions, clustering relies on absolute distance measures, which become unreliable in high-dimensional spaces because of the curse of dimensionality. In high dimensions, however, angular similarity (or cosine similarity) remains meaningful. Even when absolute distances between points become nearly uniform, angular similarity continues to reflect meaningful structural alignment. This is why clustering algorithms based on cosine similarity tend to outperform Euclidean-based methods in high dimensions.
HDC contrasts sharply with deep learning in how it handles complex patterns and feature discovery. Deep learning relies on multi-layer networks with nonlinear activations to discover abstract, hierarchical patterns. Early layers detect low-level features, middle layers detect intermediate patterns, and deep layers combine these patterns to detect complex structures. This hierarchical learning allows deep networks to uncover deep correlations in data that might not be obvious at the surface level. HDC, in contrast, encodes information directly at the vector level without hierarchical feature extraction. There are no hidden layers in HDC—everything is encoded into a single vector through binding and superposition.
This makes HDC fast and noise-resistant, but it limits its ability to discover deep, hidden patterns. For example, a CNN can learn to detect the face of a cat by first recognizing edges, then combining them into shapes, and finally forming a complex object. HDC can store and retrieve a known cat face quickly and robustly, but it wouldn’t "discover" the face on its own without being given a structured encoding of the pattern.
However, hybrid models combining deep learning and HDC are starting to emerge. A deep network could be used to extract hierarchical features, which are then encoded into an HDC vector for fast memory and retrieval. Alternatively, HDC-style binding and superposition could be incorporated into the hidden layers of a neural network, making it more robust to noise and enabling faster learning. The combination of deep pattern discovery and HDC-style encoding could create systems that learn quickly, generalize well, and remain robust under noisy conditions.
In the end, HDC and deep learning aren’t competing approaches—they are fundamentally different models of computation. Deep learning simulates the layered, hierarchical nature of biological brains, while HDC leverages the mathematical properties of high-dimensional vector spaces. Deep learning excels at discovering hidden patterns through transformation, while HDC excels at fast encoding, storage, and retrieval of structured patterns. The future of AI might not lie in choosing one over the other, but in finding a way to make them work together.
Link
Artificial intelligence is rapidly advancing, with a significant focus on improving models that process and interpret complex datasets, particularly time series data. This type of data involves sequences of data points collected over time and is cri #AI #ML #Automation
Text
Beyond the Bits: The Power of Hyperdimensional Computing
Hyperdimensional Computing
Hyperdimensional Computing (HDC) is a machine learning technique modelled on the high-dimensional, distributed way the brain represents information. HDC is built on hypervectors and, like other AI techniques, is implemented through training and inference stages.
HDC Hypervectors: Training and Inference
Hyperdimensional Computing represents data as hypervectors: very long vectors, typically with thousands of dimensions. The fundamental HDC operations (similarity, bundling, binding, and permutation) make it possible to compare hypervectors and to build an encoding scheme on top of them.
Training: During training, input feature vectors are transformed into hypervectors, which are compared and grouped into classes according to their mutual similarity. The hypervectors within each class are then bundled into a single class hypervector that represents every member of that class. These class hypervectors form the basis of an HDC classification system.
Inference: Once training has produced the class hypervectors, each new feature vector is encoded into a hypervector in the same way. The new hypervector is compared against the existing class hypervectors, and the most similar class becomes the prediction, yielding a fast, effective, and reliable classification system.
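To make the training and inference flow concrete, here is a hedged, software-only sketch in Python/NumPy (not the FPGA implementation discussed below). The encoder shown is a simple random bipolar projection chosen purely for illustration; real HDC systems use richer encoders (level hypervectors, n-grams, and so on).

```python
import numpy as np

rng = np.random.default_rng(6)
DIM, N_FEATURES = 10_000, 64

# Illustrative encoder: a fixed random bipolar projection followed by sign()
projection = rng.choice([-1.0, 1.0], size=(N_FEATURES, DIM))

def encode(x):
    return np.sign(np.asarray(x, dtype=float) @ projection)

def train(samples, labels):
    """Bundle the encoded training samples of each class into one class hypervector."""
    sums = {}
    for x, y in zip(samples, labels):
        sums[y] = sums.get(y, np.zeros(DIM)) + encode(x)
    return {y: np.sign(hv) for y, hv in sums.items()}   # normalise each class bundle

def predict(class_hvs, x):
    hv = encode(x)
    return max(class_hvs, key=lambda y: float(hv @ class_hvs[y]))

# Toy usage with random data, purely to show the flow
X = rng.normal(size=(20, N_FEATURES))
y = ["A"] * 10 + ["B"] * 10
model = train(X, y)
print(predict(model, X[0]))
```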
The inherent advantages of a classification system based on HDC are as follows:
Efficiency: Compared to conventional techniques, hypervectors represent complicated patterns compactly and efficiently, improving the efficiency of processing and classification tasks.
Minimal Power: Hyperdimensional Computing can be implemented in hardware with minimal power consumption because of its simplicity and the binary nature of its operations (XOR, AND, etc.). This is especially helpful for edge computing applications, wearable technology, and Internet of Things devices where energy efficiency is critical.
Extremely Parallel: Hyperdimensional Computing‘s distributed architecture permits parallel processing, which is comparable to the brain’s capacity for parallel processing. For classification problems, this can greatly reduce computing times.
Quick Learning: Unlike deep learning models, which frequently need a large amount of training data, Hyperdimensional Computing can perform one-shot or few-shot learning, in which the system learns from a relatively small number of samples. This feature makes HDC very beneficial in situations where data is limited or changing quickly.
Robust and Reliable: Because of its high dimensionality, Hyperdimensional Computing is intrinsically resistant to errors and noise. Reliable classification is possible even in noisy conditions because small perturbations or distortions in the input data do not dramatically alter the overall representation. Furthermore, manipulating hypervectors involves a large number of simple, repetitive operations, which makes them well suited to hardware acceleration.
The efficient, low-power, highly parallel, and hardware-implementable nature of Hyperdimensional Computing applications makes them perfect for running on Altera FPGAs, which are also extremely parallel and efficient.
Utilising Altera FPGAs and oneAPI
The Intel oneAPI Base Toolkit simplifies cross-architecture, high-performance application development. It implements the industry-standard oneAPI specification and works with FPGAs, CPUs, GPUs, and AI accelerators.
One of oneAPI’s primary advantages is a simpler development process. With oneAPI, developers don’t need to learn multiple programming languages to build SYCL/C++ applications that run on diverse architectures. This saves a great deal of time and effort, because code is written once and runs on multiple processors, and applications can be deployed on the most economical and efficient platform without modifying the code.
With the Intel oneAPI Base Toolkit, Hyperdimensional Computing applications written in SYCL/C++ can be implemented directly on Altera FPGAs.
To get started with oneAPI for FPGAs, see Boosting Productivity with High-Level Synthesis in Intel Quartus with oneAPI for information on tutorial examples, self-start learning videos, and reference designs.
Hyperdimensional Computing Image Classification using Altera FPGAs
Image classification is one scenario where HDC is applied in a practical setting. In this example, developed by Ian Peitzsch at the Centre for Space, High-Performance, and Resilient Computing (SHREC), an HDC image classification system was implemented on an Altera FPGA for both training and inference.
While the data flow is similar in both scenarios, the computations differ. Streaming feature vectors are sent to the FPGA from the host. Every vector is directed to one of eight compute units in the training flow and one of twenty-five compute units in the inference flow.
Training: The FPGA receives a stream of feature vectors from the host. Each vector is dispatched to the parallel encode compute units (CUs). The encoded partial hypervectors are bundled and a label is produced; the bundled hypervector is added to the class associated with that label. After all of the training data has been processed, the class hypervectors are streamed back to the host to be normalised.
Inference: The FPGA receives a stream of feature vectors from the host. Each vector is sent to the parallel encode CUs, and the results are routed to the classification kernel to form a single hypervector. This hypervector is compared to every class hypervector, and the prediction is the class with the greatest resemblance. The prediction is then streamed back to the host.
HDC classification algorithms are a perfect fit for Altera FPGAs due to intrinsic properties of both oneAPI and FPGAs, in addition to the fact that Hyperdimensional Computing applications generally lend themselves well to hardware implementation.
oneAPI supports the SYCL language’s Unified Shared Memory (USM) feature. In both the code base and the final hardware, SYCL USM lets the host and accelerator share the same memory, unlike plain C or C++ alternatives. This lowers system latency, improving overall performance, and allows the simple, industry-standard coding technique of accessing data explicitly through a pointer, whether the data resides on the host or the accelerator.
The bottleneck is the encoding step. Because each dimension of a hypervector can be encoded independently, the parallel nature of a programmable FPGA allows many compute units to run concurrently, which greatly reduces inference time. Parallel processing was employed in the encoding stage during both training and inference, and the USM support guaranteed a low-latency solution.
At AI Inference, Altera FPGAs Perform Well
This Hyperdimensional Computing image classification algorithm was implemented on CPU, GPU, and FPGA resources in a real-world evaluation:
Cascade Lake-based Intel Xeon Platinum 8256 CPU (3.8 GHz, 4 cores)
Eleventh-generation Intel UHD 630 GPU
Intel Stratix 10 GX FPGA
All implementations used a Hyperdimensional Computing classification model with 2,000 hyperdimensions of 32-bit floating-point values. All three implementations achieved similar accuracy of about 94-95% using the NeuralHD retraining algorithm.
The results highlight the advantages of utilising an FPGA during the inference phase. For training, the important metric is the total time spent training the model; during inference, the key performance indicator is latency.
For training, the FPGA provided an 18× speed-up and the Intel GPU reached a 60× speed-up over the CPU. On the other hand, the CPU and FPGA implementations reached about 97% accuracy, whereas the GPU implementation only managed about 94%.
Due to memory constraints in this training scenario, the FPGA could only employ 8 CUs in parallel during encoding. A larger FPGA might allow more parallel CUs and a shorter FPGA training time, but the evaluation was carried out using fixed resources on Intel DevCloud, where a larger FPGA was not available.
Benefits of FPGA AI Latency
With a 3× speed increase over the CPU, the FPGA exhibits the lowest latency during the inference stage. The GPU proves to be overkill for inference, requiring significantly more time than the FPGA or even the CPU (a slow-down rather than an acceleration).
These findings again demonstrate the advantages of utilising an FPGA for AI inference. Altera FPGAs are the best option for quick and effective AI inference in data centre and edge applications because they combine performance and generally lower costs with license-free oneAPI development tools. An AI algorithm is typically trained and developed once, then deployed for inference at mass scale.
Read more on Govindhtech.com
Text
Hyperdimensional Computing Enables New Possibilities for Artificial Intelligence
Embedded YouTube Video
An ANN will need to be able to distinguish between four different colors based on the shape, such as a circle. This is the starting point for a radically different approach to computation, known as hyperdimensional computing. For this purpose, he has invented a new type of vector, or hypervector. A hyperdimensional vector is an array of ten thousand numbers, say, representing…
Text
Hyperdimensional Computing Reimagines Artificial Intelligence
By imbuing enormous vectors with semantic meaning, we can get machines to reason more abstractly—and efficiently—than before. from Science Latest https://ift.tt/yr28LSV via IFTTT
Note
Dear Vector Prime, do you ever get bored? Are there parts of your life that you find boring?
Dear Laconic Listener,
The Guardian of Space and Time must never succumb to boredom; I simply do not have the luxury of such a thing. At the scale of energies that I perceive and manipulate, there are a host of hyperdimensional creatures that might take advantage of such a mental state, and I must protect myself against them. For example, there are the dreaded meta-scraplets. Instead of drilling into a Cybertronian body, meta-scraplets devour resolve and willpower. Indeed, one of the early symptoms of a meta-scraplet infection is listlessness and excessive boredom, which they intensify through their feeding. Safeguard and I scan each other for them frequently—better safe than sorry.
Of course, even my spirits need recharging from time to time, and I do have a few tried-and-true pick-me-ups. Music has helped me through rougher times—I visit a music "Dis-chord" for inspirational anthems, where Safeguard swaps tracks with a Recordicon named Mixtape, and we discuss which of us has better taste in music. I try to spend periods of idleness in meditation—the techniques I employ have their roots in Circuit-Su—but sometimes I forget I'm meditating and doze off into a sleep cycle, which is its own problem! How do I know if I'm dreaming or just unconsciously transported to another universe?
I suppose I am blessed with a few rare advantages when it comes to staving off boredom. If I come across a show I like, but find myself sitting through a lot of filler, I can simply travel to the future to see how things turn out. My misadventures throughout the multiverse are often fraught, but never boring. And, of course, I have all of you to keep me busy.
#ask vector prime#transformers#maccadam#vector prime#meta-scraplets#safeguard#dis-chord#recordicons#mixtape#circuit-su
Text
SCP-5155: The Plush Toy (Final Empire)
Item #: SCP-5155
Object Class: Safe
Special Containment Procedure: SCP-5155 primarily resides in a 9.1 m x 9.1 m (30 ft x 30 ft) room in Minimal Security Site 21. It is to be afforded full access to any items that it requests, and it is to be provided with contact with on-site personnel at its convenience. Contact with the children of on-site personnel is to take place on at least a weekly basis at [redacted]. Children are to be instructed according to Document 5155-A or 5155-B depending on their age and level of cognitive development.
SCP-5155 is to be kept informed of any and all events deemed relevant as outlined and discussed in Document 5155-C. In the event of major containment failure at any of [redacted], SCP-5155 is to be informed immediately and its schedule altered to allow for immediate deployment and mobilization. Anyone attempting to impede SCP-5155 from deploying may be subject to immediate termination.
In addition to its personal quarters, SCP-5155 is to be allowed access to the research facilities available on site, as well as any additional research facilities noted in Document 5155-D. Research personnel are expected to cooperate with SCP-5155 and to seek out its advice in any of the matters relating to [redacted]. Time is to be set aside each day for Foundation researchers to consult with SCP-5155 on other matters. This time should not be less than one hour each day unless SCP-5155 is currently deployed.
SCP-5155 is to be kept apprised of the deployments of [redacted], [redacted], and [redacted]. Furthermore, weekly training sessions are to be organised with the aforementioned parties at a time and site of SCP-5155’s choosing. The primary objectives of each training session are up to SCP-5155 but are to serve the broader goal of improving the training, expertise, tactics, strategies, and overall effectiveness of Foundation operatives in the field.
Should SCP-5155 be required in the field, [redacted] and [redacted] are to accompany it whenever practicable. Personnel in the field are expected to obey its commands and directives, and SCP-5155 should be treated as the senior officer in any and all field engagements with the exception of [redacted]. SCP-5155 should not, under any circumstances, be allowed unapproved or unsupervised contact with hostile Keter Class Threats, as it will almost certainly attempt – and most likely succeed – to terminate them with extreme prejudice, regardless of the level of collateral damage involved (see e.g., Document 5155-E for a summary of several such incidents).
At no time should any personnel attempt to remove or alter the decorations in SCP-5155’s personal quarters. Of particular importance is the symbol on the far wall, which depicts what appears to be a snowflake with a rose in the middle. Any attempt to remove, alter, or deface the symbol has been met with extreme hostility. If personnel attempt to interfere with the decorations, SCP-5155 may become hostile and refuse to cooperate with Foundation personnel in any way shape or form for a period of not less than 24 hours.
Under no circumstances should personnel harm, or allow harm to befall, a child while SCP-5155 is present. SCP-5155 has demonstrated a willingness to use extreme force to secure the safety of children and others it considers innocent of wrongdoing.
Description: SCP-5155 is a plush toy roughly 30 cm (1 ft) in height. Although its appearance is heavily stylised, it seems to have been made in the image of a pink-haired woman with blue eyes who wears a white outfit and has a red cape. SCP-5155’s usual expression is a scowl or a glare, although it has occasionally been seen to smile, most often in the presence of children.
SCP-5155 is extremely intelligent. It claims to have assimilated the contents of the entire Internet in a matter of minutes, and it has demonstrated an immense level of understanding in a range of different fields from science and technology to art and psychology. It has the ability to design and construct devices that utilise technology far beyond current human capabilities. Present estimates of its computing power suggest that it has abilities exceeding the combined computing power of the world by at least several orders of magnitude. SCP-5155 is capable of interfacing with any and all currently known computer systems. Furthermore, it can access any and all existing networks, regardless of security or location. In practice, it has access to any information currently stored in a digital format within its range. Its maximum range is currently unknown, but it is at least large enough to encompass the entire planet.
Although SCP-5155 appears to be extremely cuddly, it is actually composed of a substance with highly variable properties. Testing has revealed that SCP-5155 seems to be completely impervious to all presently known forms of physical, chemical, biological, and energy-based attack. Tests using reality-warping or psychic SCPs suggest that it has similar levels of immunity to those vectors of attack as well.
Apart from being able to alter its body into various other shapes (e.g., turning its arms into blades), SCP-5155 has access to a subspace pocket of unknown size. Contained within this pocket are a tremendous variety of weapons. During testing and fieldwork, SCP-5155 has shown no hesitation in naming and explaining the approximate properties of its weapons, claiming that humans lack the technology to reproduce them.
The current list of weapons known to be available to SCP-5155 is as follows:
Plasma weapons
Rail guns
Ballistic firearms
Warp weapons
Matter decomposition weapons
Photonic-kinetic weapons
Fusion weapons
Anti-matter weapons
Singularity weaponry
EM weaponry
False vacuum weaponry
Hyperdimensional weaponry
Reality altering weaponry
In addition to its immense durability (SCP-5155 has yet to be observed taking damage), SCP-5155 also has access to a broad variety of defensive measures, many of which can be deployed from its subspace pocket. These defensive measures appear to be counterparts to its offensive weaponry, and SCP-5155 has stated that despite its formidable arsenal of weapons, it is actually more adept at defence. These claims were verified when it shielded the entire planet from a bombardment launched by SCP-2399 before destroying the other SCP with its retaliatory strike. Of particular note is that SCP-5155 was able to do this while sitting on the lap of a child and showing no visible signs of effort or strain.
Based on data derived from the aforementioned incident and other field and laboratory tests, it is difficult to ascertain the true scale of SCP-5155’s strength. Suffice it to say, it could easily wipe out humanity if it was so inclined. Following the incident with SCP-2399, SCP-5155 was briefed on [redacted]. [redacted], and [redacted], and it agreed to intervene should any of those occur. It has also agreed to help in containing SCP-682 should it ever escape containment.
SCP-5155 is capable of speech, although it seldom talks unless necessary. It is also capable of flight as well as teleportation (range currently unknown), and it can move through any medium (e.g., water, mud, concrete) with equal ease. Although it has yet to demonstrate this ability, SCP-5155 claims to be capable of superluminal travel. When not otherwise engaged, SCP-5155 will typically commandeer the lap of nearby personnel for at least 15 minutes. SCP-5155 shows a preference for the laps of women or children. SCP-5155 may also ask to be carried around like a regular plush toy, and personnel are advised to comply.
Document 5155-F: SCP-5155 was discovered after preventing the kidnapping of a child by an instance of SCP-1788-1. Foundation personnel approached SCP-5155, and it proved amenable to communication, explaining its situation and acting as though it had full knowledge of the Foundation’s purpose and objectives. Given what is now known about its abilities, it is entirely possible that was the case.
After agreeing to accompany Foundation personnel to an interview, SCP-5155 provided more details regarding its situation. It claimed to have been created by a multiverse-spanning empire of unparalleled might and glory, which it referred to as the ‘Grand Empire Alliance’.
According to SCP-5155, it was in transit when it encountered a massive dimensional anomaly that left it stranded not far from the park. It further claimed that it was already familiar with Earth, albeit not this version of Earth. Upon further questioning, it elaborated, stating that the Grand Empire Alliance encountered its first version of earth thousands of years ago before encountering other versions of Earth during its expansion throughout the multiverse. However, SCP-5155 claimed that this particular version of Earth was not currently mapped or known to its creators.
After a demonstration of its abilities and consultation with the O5 Council, the decision was made to broker a deal to secure the assistance of SCP-5155. Although SCP-5155 claimed to be able to open a portal back to its home dimension, it stated that it could not do so for moral reasons since opening such a portal would lead to the destruction of our universe. Instead, SCP-5155 would help the Foundation in exchange for help in acquiring the resources it required to construct a less dangerous portal.
Of particular interest was the reason SCP-5155 gave for why its portal would do so much damage. It suggested that the fabric of our universe had already suffered substantial damage due to the intervention of several higher-dimensional beings. Although SCP-5155 could not name these beings, it was able to provide descriptions of their abilities based on the damage its scans had detected. The descriptions it provided matched almost perfectly with [redacted], [redacted], and [redacted].
After being informed that some of these beings were still present, SCP-5155 calmly stated that it would rectify that before leaving. To date, SCP-5155 has already eliminated a number of existential threats and has developed plans to deal with more.
Quote
This article reviews recent progress in the development of the computing framework Vector Symbolic Architectures (also known as Hyperdimensional Computing). This framework is well suited for implementation in stochastic, nanoscale hardware and it naturally expresses the types of cognitive operations required for Artificial Intelligence (AI). We demonstrate in this article that the ring-like algebraic structure of Vector Symbolic Architectures offers simple but powerful operations on high-dimensional vectors that can support all data structures and manipulations relevant in modern computing. In addition, we illustrate the distinguishing feature of Vector Symbolic Architectures, "computing in superposition," which sets it apart from conventional computing. This latter property opens the door to efficient solutions to the difficult combinatorial search problems inherent in AI applications. Vector Symbolic Architectures are Turing complete, as we show, and we see them acting as a framework for computing with distributed representations in myriad AI settings. This paper serves as a reference for computer architects by illustrating techniques and philosophy of VSAs for distributed computing and relevance to emerging computing hardware, such as neuromorphic computing.
[2106.05268] Vector Symbolic Architectures as a Computing Framework for Nanoscale Hardware
Photo

New Metatron Channel Archangel Metatron via James Tyberonn
Keepers of the CrystalsThe Blue Races of Hollow Earth
Copyrights Reserved 2017 Greeting Masters, I am Metatron, Angelic of Light with Tyberonn of Crystalline Service.
We greet you in a vector of unconditional love! Dear Humans, As the frequency of the new earth continues to increase, so does the vibratory rate of the crystals. This is by purpose, by plan, for indeed the major crystal fields of the planet are playing major roles in the dissemination of the crystalline pattern.
Indeed, the conscious mineral kingdom of the Earth is awakening to new levels of importance and influence. It is in fact the 'Crystalline' regions of the Earth that are playing the most prominent role in the shifts and upgrades in dimensional frequency. And we assure you, logically, this benevolent influence & resulting crysto-pineal stimulation is of great benefit for humanity. More and more of you are feeling the draw to visit crystalline energy fields, and indeed there is intuitive understanding & purpose of great significance in doing so. We assure you, there is a divinely orchestrated & purposeful call within the crystalline members of the conscious mineral kingdom to increase the generation & flow of vital life force, of coded geo-energy that it receives from the crystalline core of the earth, Saturn and from the central sun.
In essence the massive crystal fields of Arkansas & Brazil act as 'motherboards', to use your computer vernacular. They receive codes, convert them and communicate, or project, the new crystalline pattern to the planet and humanity in the ongoing up-shift. To no small degree they are monitored and maintained by the Masters of the Sirian-Pleiadean- Arcturian Alliance, with assistance from certain 'Omni-Earthplane' races of what may be termed as the 'Blue Beings. Parallels & Blue Beings And so in this channeling we are asked to speak on the 'Blue Beings' & parallel dimensional realms, with emphasis on the region of the Crystalline Vortex of Arkansas. Humanity tends to think of the earth as its own, yet the conscious earth has many levels, many parts termed as the Omni-Earth that combine to make a complex multidimensional whole. The Earth was occupied long before human souls manifested into physicality. Races from other systems of the physical and non-physical omni-verse have been visiting and indeed dwelling on the earth for billions of years. And in fact, the animal, plant & mineral kingdoms were well established eons before humanity began their earthly reign over physical dominions. Physical mankind, in a very relative sense, are the comparative newcomers.
Dimensional Clarity In the Crystalline shift, of 2017 and beyond, these 'parallel' realms will become more tangible to you, and in due time, you will meet your 'next-door' dimensional neighbors. There are numerous succinct energy forces that create a fertile field of dimensional parallels. Crystalline minerals, noble & conductive metals, radioactive minerals, tectonic pressure, dynamic aquifer systems and volcanics each project certain vectors of vibrational frequency which can potentially become unique concentric capacitors of life forms within parallel multi-dimensionality. Such vectors enable anomalies in the space time continuum & provide valid realms of parallel physical, pseudo-physical & non matter domains. It is natural and fitting that the crystalline deposits on the planet are among the first to display remarkable evidence of life forms that were previously less tangible.
Atlantis & Star Nation Because of the unique crystalline geology of what is now termed as 'Arkansas', this hi-frequency land was & remains uniquely attractive to all life. Indeed the crystal-geology indigenous to Arkansas was chosen in past time epochs as a colony of not only the 'Atla-Ra' Atlanteans, but also of the blue-skin races of Lumania & LeMuria , as well as the benevolent 'Star-Nation ET's, of the 'Sirian-Pleiadian-Arcturian Alliance'. The crystalline energy of the massive quartz & mineral deposits in both Arkansas & Brazil have projection field vectors that are visible (and tangible) from far above the earth sphere. The double spin vortex of Arkansas reaches well above the stratosphere, and forms what may be termed as a major highway, a dimensional corridor, or wormhole, which facilitates entry. The ET, in your vernacular, the Star-Nation were the first to establish bases in the area of the crystals eons before humanity entered the earth-plane. The massive crystals and crystal caverns of Ark formed approximately 250-300 million years ago, in a time in which waters of oceans and lakes of glacier melt competed for containment, in a give and take of ebb & flow. Extensive waters covered much of the area that is now Arkansas. The Star Beings established bases in the crystals regions of Arkansas roughly 100 million years ago. The Ouachita & Ozark Mountains in this time were considerably higher and appeared as large islands. Subterranean bases were established within the earth, below the surface waters, in the massive caverns.
Pleiades & the Atla-Ra It was in fact the Pleiadian contingency that advised the Atlanteans of the Crystal sources in Arkansas (and Brazil)....and were indeed responsible for much of the technology developed in Atlantis with crystalline quartz. The Pleiadeans were very instrumental in the early stages of Atlantis, and dwelt on (and in) the omni-earth without permanent manifestation into physical realms. The Atla-Ra, those termed as the scientist priest of Atlantis, were in fact trained by the spiritual Pleiadeans. They were initiates of a great wisdom school involving disciplines of sacred science & spiritual mastery of the 'Law of One'. The Atla-Ra became highly developed mystics with extraordinary powers of mind, yet adhering to high values of spiritual morality and altruistic ideals. The Atla-Ra were the benevolent & intellectual mystical scientist priests, of male, female & androgynous beings, that were the initiated 'Masters of the Crystals'. In the first 2 periods of the three phases of Atlantis, only the Law of One wielded control, and benevolently so, of the crystals. Only in the final phases were non-initiate Aryans allowed into the knowledge, and they sadly betrayed this wisdom.
Editor Insert: (Edgar Cayce Reading 440-5) About the firestone that was in the experience did the activities of the entity then make those applications that dealt with both the constructive and destructive forces in the period....The building above the stone (Tuaoi Crystal) was oval, or a dome wherein there could be or was the rolling back, so that the activity of the stone was received from the sun's rays, or from the stars; the concentrating of the energies that emanate from bodies that are on fire themselves - with the elements that are found and that are not found in the earth's atmosphere. The concentration through the prisms or (crystal) glass, as would be called in the present, was in such a manner that it acted upon the instruments that were connected with the various modes of travel, through induction methods ...The preparation of this stone was in the hands only of the initiates at the time...' The Atlanteans of the Atla-Ra were primarily of Pleiadian star-seed. The 'Golden Ones' as they were termed, were what may be considered as giants in current times, standing between 10 - 12 feet in height, and in some cases extended to 15 feet. They were generally well proportioned, born into a naturally athletic physiology. The 'Golden' Atlanteans had a skin toning or hue of a burnished yellow-gold, and typically had blond hair, and would appear similar to the 'Nordic' race of Pleiadeans, from which they were seeded... except for the skin tone. Their average life expectancy extended from 900 to 1200 years, although some were capable of rejuvenation and lived beyond 7000 years. Certain of your present seers have predicted that evidence of Atlantis is to be found in Arkansas; (Editor's Note-Kryon & Tyberonn) ... but we tell you it will not be discovered on the surface, as an archeological remnant. We will add that the most viable surviving 'surface' remnants of ancient Atlantis are visible in Peru, Central America, Egypt & Mediterranean areas, all of which were Atlantean colonies.
Question to Metatron: Edgar Cayce also spoke of 3 Halls of Records preserved underground by the Atlantean, Iltar, before the final sinking. Are any part of these in Arkansas or Brazil? Are their tunnels connecting to these Hall of Records?
Metatron Response: That which is termed the three 'Atlantean Halls of Records' do still remain, and all are subsurface. These 'Halls' are not physically in Arkansas nor Brazil, but may be more easily accessed mentally, visually in crystalline arenas, for the piezoelectric aspects of the Crystal Vortex stimulate and open the pineal, the 3rd eye. Yet in both Arkansas and Brazil, there are intricate tunnel systems, connecting to hollowed chasms and caverns of the Atlantis & LeMuria eras networked below the ground at varying depths. Certain of these are remarkably preserved and are quite astonishing. In truth the evidence of Atlantis in Arkansas has been known by post-flood Native Americans for centuries, since the time of the early Mound Builders, who were migrants from the Atlantean antediluvian colonies in Yucatan and Poseida. Although the most famous of the remaining mounds is in Ohio (Serpent Mound), the Atlantean Red Race established communities throughout North America, in virtually every state of the lower 48, and in south-eastern & western Canada. The Caves, specifically in Arkansas & Brazil, contain not only remnants but ongoing clusters and bases, which are quite active in the present, albeit the active sections are in hyperdimensional fields. As your Edgar Cayce noted, the Atlanteans developed technology that allowed for the creation of 'tubes, tunnels and elevators', in the subterranean portions of the upper & mid mantle.
(Ref Edgar Cayce Readings # 2157-1 & # 1730-1) These tunnels & shafts are networked throughout much of subterranean portions of Arkansas, most of which are many kilometers beneath the surface.
Question to Metatron: There have been numerous reports of people in Arkansas encountering beings in the cave systems of Arkansas, most report them as 'blue skin' humans, can you speak on this please?
Response: The Cave systems within the Crystal Vortex of Arkansas contain many anomalous & mysterious aspects, and are indeed home to underground communities of what is termed 'Hollow Earth'. We will validate the veracity of communities beneath the lands of Arkansas. There have been instances in the past in which shamanic mystics of Native Americans have interfaced with them. In more recent times, residents of Arkansas have explored caves in the area, and also encountered the blue skinned race. But it must be clarified, that in the publisized, relatively recent incidents, these beings were very briefly 'imaged', and not truly seen, not physically encountered, in a 4th dimensional environ, somewhat in the same resonance of what may be termed as dream-scape astral dimension. This was enabled in part, due to the piezoelectric minerology within the caverns, the tectonic energies of the New Madrid Fault, combined with magnetic fields within the caverns. These influences facilitate, activate what may be termed multidimensional vision, directly stimulating the piezo crystalline attributes of the pineal gland. The related encounters with tall beings & blue skins beings in the caverns near Cushman, (Glass Cave), as reported, again, in more recent times by the spelunkers, occurred in the same vision-esque matrix. The visuals experienced by the explorers occurred in an altered state, a dreamlike high theta consciousness, induced minerologically & magnetically ... well beyond normal beta wave, above 3d. The explorer's accounts of the experience, were quite confusing to them, nebulous & distorted in memory, and also somewhat exaggerated & embellished over time. But indeed, the Blue-Beings are factually present, and the alternate parallel encounters did occur, and will occur more often in the new energy of the planet. We will add that uniquely in both Arkansas & Brazil, the massive crystals can and do externally project their consciousness in a rather novel 'faerie-esque' format. These appear in a myriad of projected forms, most commonly appearing as blue translucent 'orbs'. These also may occur as darting wisps of blue tinted photonics, piezo-luminescent mini lightning bolts, similar in appearance to what is termed in Faerie-Devic mythology as 'Will of the Wisp'.
The Blue Beings Now, to further clarify, the 'Blue' beings are different in both frequency and cycle from the extra-terrestrials that populate underground bases in portions of the Ouachita & Ozark Mountains. As we have shared previously, the Sirian-Pleiadian-Arcturian Alliance are highly involved in the upshift of the planet, and the ongoing shift into the crystalline energies of the 144 Grid. There is indeed a base of the SPA Alliance below the region of the Talimena Ridge in Arkansas. The benevolent Star-Beings do interact with the blue peoples, but are of a different frequency and nature. The extra terrestrials are light forms, photonic in nature. There are actually several factions of the subterranean 'blue' races globally. The primary two that habituate dimensional caverns beneath the mountains of Arkansas originated from Lumania & MU (LeMuria). Both are spiritually advanced, but in separate physiologies and in different frequencial dimensional-cycles. The Lumanians are the beings encountered in imagery vision, in the caves near Cushman, termed as Blow Cave and Glass Cave. These are much taller than the blue people of LeMuria. The Lumanians are 7 to 8 feet in stature, are very thin, yet wiry. They are of a very ancient Sirian Star-Seed, and populated various portions of the planet in a very ancient, yet forgotten era, many millions of years before LeMuria & Mu. Interestingly, they appear very similar to the blue-skinned people depicted in the movie 'Avatar', but without the pointed ears or tails. Their skin is quite smooth, and their facial features are narrow, almost feline in appearance, with almond ovalesque eyes. They exist primarily as light beings and truly do not have a physical form, rather they project an image, in the rare circumstances they are encountered by humanity. In their present roles, the remaining Lumanians are in essence serving as care-takers for the planet, working closely with the sentience of the conscious Earth, termed as Gaia. The LeMurians went underground many eons after the Lumanians, and exist in greater numbers. The skin tone of the LeMurians is more of a chameleon toning. The ones beneath the lands of North America, and specifically below Arkansas are more of a teal color, and they are much shorter in height. The ones that exist in other parts of the planet, such as Peru and Brazil, are more of a reddish or rusty bronze tonation. Color is determined more by the mineral content of the water they consume. There remains a much larger populace of the 'Hollow Earth' LeMurians in the inner realms of the planet now, in comparison to the Lumanians. Not all of the LeMurians went underground, many migrated into OG (Peru-Bolivia & Ecuador), and others into the surface regions of continental Atlantis. The antediluvian LeMurians were dwarf like, in appearance, quite hairy, and spawned the mythological 'middle earth' tales of hobbits. As they went deeper into the earth, the pore pressures of the more profound subterranean hollows influenced changes in their physiology, and over time their physical density lessened and their rather bushy hair no longer grew. When the LeMurians originally arrived on the earthplane, they were pseudo physical, primarily etheric thought forms or projections. They were able to 'push' their consciousness into both biological & botanical life forms. They did not become purely physical until approximately 200,000 years ago. 
The LeMurians did indeed have interaction with the Atla-Ra, and assisted in the coding of crystals both in Arkansas and Brazil. At that time they populated both surface terrain and the subterranean. Although they are not of the Devic or Fae Kingdoms, which are also highly present among the crystals, they were often considered as dwarves, albeit the physical variety. Their height varied from 3 feet to 4 ½ feet in stature. They are sensitive & extremely passive in nature, and are as yet quite reluctant to have interface with humanity. These beings are purely telepathic in their communication, and are very aware of the aggressive aspects of humankind. However it should be noted, that these beings work closely with the Earth, and also with the benevolent beings of Star-Nation. They are highly compassionate, but are aware that time is not yet appropriate for intentional direct contact with humans. This will occur as humanity achieve a higher light quotient, and become more fluent & lucid in multidimensional awareness. In future channels we will share more of these beings. We will additional speak on the unique devic kingdom that is projected from conscious crystals.
Closing In closing we say again ....'As the frequency of the new earth continues to increase, so does the vibratory rate of the crystals. This is by purpose, by plan, for indeed the major crystal fields of the planet are playing major roles in the dissemination of the crystalline pattern. Indeed, the conscious mineral kingdom of the Earth is awakening to new levels of importance and influence.' It is in fact the 'Crystalline' regions of the Earth that are playing the most prominent role in the shifts and upgrades in dimensional frequency. And we assure you, logically, this benevolent influence & pineal stimulation is of great benefit for humanity. More and more of you are feeling the draw to visit crystalline energy fields, and indeed there is intuitive understanding & purpose of great significance in doing so. We assure you, Dear Human, that there is a divinely orchestrated & purposeful call within the crystalline members of the conscious mineral kingdom to increase the generation & flow of vital life force, of coded geo-energy that it receives from the crystalline core of the earth, and from the central sun. I am Metatron with Tyberonn of Crystalline Service. You are Dearly Beloved. Marvelous transitions are occurring, and you are on path. Keep your light shining! And so it is...And it is So...
The above channel is Copyright Protected © All Rights Duly Reserved 2017 to James Tyberonn & Earth-Keeper Publications
Text
Machine Learning: The Magic is How it Works
I was talking with a friend recently about artificial intelligence (AI) and machine learning (ML), and they noted that if you replaced AI or ML with the word magic, many of those discussions would be as useful and informative as before. This is due to a number of factors, including misunderstanding of the current state of AI, ML, and more specifically deep neural networks (DNNs)—what ML models are actually doing, and how ML models are used together.
I hope that those who have been working with ML take kindly to my explanations, because they’re targeted at engineers who want to understand and use ML but haven’t gotten through the hype that even ML companies are spouting. More than half of you are looking into ML, but only a fraction is actually incorporating it into products. This number is growing rapidly though.
ML is only one part of the AI field, and many ML tools and models are available, in use now, or in development (Fig. 1). DNNs are just one piece; other neural-network approaches enter the mix, but more on that later.
1. Neural networks are just a part of the machine-learning portion of artificial-intelligence research.
Developers should look at ML models more like fast Fourier transforms (FFTs) or Kalman filters. They’re building blocks that perform a particular function well and can be combined with similar tools, modules, or models to solve a problem. The idea of stringing black boxes together is appropriate. The difference between an FFT and a DNN model is in the configuration: the former has a few parameters, while a DNN model needs to be trained.
Training for some types of neural networks requires thousands of samples, such as photos. This is often done in the cloud, where large amounts of storage and computation power can be applied. Trained models can then be used in the field, since they normally require less storage and computation power than their training counterparts. AI accelerators can be utilized in both instances to improve performance and reduce power requirements.
Rolling a Machine-Learning Model
Most ML models can be trained to provide different results using a different set of training samples. For example, a collection of cat photos can be used with some models to help identify cats.
Models can perform different functions such as detection, classification and segmentation. These are common chores for image-based tools. Other functions could include path optimization or anomaly detection, or provide recommendations.
A single model will not typically deliver all of the processing needed in most applications, and input and output data may benefit from additional processing. For example, noise reduction may be useful for audio input to a model. The noise reduction may be provided by conventional analog or digital filters, or there may be an ML model in the mix. The output could then be used to recognize phonemes, words, etc., as the data is massaged until a voice command is potentially recognized.
Likewise, a model or filter might be used to identify an area of interest in an image. This subset could then be presented to the ML-based identification subsystem and so on (Fig. 2). The level of detail will depend on the application. For example, a video-based door-opening system may need to differentiate between people and animals as well as the direction of movement so that the door only opens when a person is moving toward it.
2. Different tools or ML models can be used to identify areas of interest that are then isolated and processed to distinguish between objects such as people and cars.
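As a rough illustration of chaining models as building blocks, the hypothetical sketch below strings a region-of-interest detector and a classifier together. The names `detect_regions` and `classify_crop` are placeholders, not any particular library's API.

```python
# Hypothetical pipeline: a detector proposes regions of interest, a classifier labels each crop.
def process_frame(frame, detect_regions, classify_crop):
    """Run a detector to find regions of interest, then classify each cropped region."""
    results = []
    for (top, left, bottom, right) in detect_regions(frame):
        crop = frame[top:bottom, left:right]          # isolate the area of interest
        label, confidence = classify_crop(crop)       # e.g. ("person", 0.93)
        results.append(((top, left, bottom, right), label, confidence))
    # Application logic (e.g. open a door only for a person moving toward it)
    # sits on top of these per-region results.
    return results
```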
Models may be custom-built and pretrained, or created and trained by a developer. Much will depend on the requirements and goals of the application. For example, keeping a machine running may mean tracking the operation of the electric motor in the system. A number of factors can be recorded and analyzed from power provided to the motor to noise and vibration information.
Companies such as H2O.ai and XNor are providing prebuilt or customized models and training for those who don’t want to start from scratch or use open-source models that may require integration and customization. H2O.ai has packages like Enterprise Steam and Enterprise Puddle that target specific platforms and services. XNor’s AI2Go uses a menu-style approach: Developers start by choosing a target platform, like a Raspberry Pi, then an industry, like automotive, and then a use case, such as In-cabin object classification. The final step is to select a model based on latency and memory footprint limitations (Fig. 3).
3. Shown is the tail end of the menu selection process for XNor’s AI2Go. Developers can narrow the search for the ideal model by specifying the memory footprint and latency time.
It’s Not All About DNNs
Developers need to keep in mind a number of factors when dealing with neural networks and similar technologies. Probability is involved, and results from an ML model are typically expressed as percentages. For example, a model trained to recognize cats and dogs may be able to report with high confidence that an image contains a dog or a cat. The confidence may be lower when distinguishing a dog from a cat, and lower still when identifying a particular breed of animal.
The percentages can often improve with additional training, but the gains usually aren't linear. It may be easy to hit the 50% mark, and 90% might make for a good model. However, a lot of training time may be required to hit 99%.
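The numbers in the sketch below are made up, but they show how an application might consume those confidence levels instead of treating the top answer as ground truth.

```python
# Sketch of how an application might act on model confidences.
# The scores are illustrative; a real model would produce them (e.g., via softmax).
scores = {"cat": 0.62, "dog": 0.31, "beagle": 0.05, "other": 0.02}

best_label = max(scores, key=scores.get)
best_score = scores[best_label]

if best_score >= 0.90:
    decision = best_label                   # confident enough to act on
elif best_score >= 0.50:
    decision = f"probably {best_label}"     # usable, but flag for review
else:
    decision = "unknown"                    # defer to another sensor or a human
print(decision)
```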
The big question is: “What are the application requirements and what alternatives are there in the decision-making process?” It’s one reason why multiple sensors are used when security and safety are important design factors.
DNNs have been popular because of the availability of open-source solutions, including platforms like TensorFlow and Caffe. They have found extensive hardware and software support from the likes of Xilinx, NVIDIA, Intel, and so on, but they’re not the only types of neural-network tools available. Convolutional neural networks (CNNs), recurrent neural networks (RNNs), and spiking neural networks (SNNs) are some of the other options available.
SNNs are used by BrainChip and Eta Compute. BrainChip’s Akida Development Environment (ADE) is designed to support SNN model creation. Eta Compute augments its ultra-low-power Cortex-M3 microcontroller with SNN hardware. SNNs are easier to train than DNNs and their ilk, although there are tradeoffs for all neural-network approaches.
Neurala’s Lifelong-DNN (LDNN) is another ML approach that’s similar to DNNs but with the lower training overhead of SNNs. LDNN is a proprietary system developed over many years. It supports continuous learning using a lightweight approximation of backpropagation that allows learning to continue without the need to retain the initial training information. LDNN also requires fewer samples to reach the same level of training as a conventional DNN.
There’s a tradeoff in precision and recognition levels compared to a DNN, but such differences are similar to those involving SNNs. It’s not possible to make direct comparisons between systems because so many factors are involved, including training time, samples, etc.
LDNN can benefit from AI acceleration provided by general-purpose GPUs (GPGPUs). SNNs are even more lightweight, making them easier to use on microcontrollers. Even so, DNNs can run on microcontrollers and low-end DSPs as long as the models aren’t too demanding. Image processing may not be practical, but tracking anomalies on a motor-control system could be feasible.
Overcoming ML Challenges
There are numerous challenges when dealing with ML. For example, overfitting is a problem experienced by training-based solutions. This occurs when the models work well with data similar to the training data, but poorly on data that’s new. LDNN uses an automatic, threshold-based consolidation system that reduces redundant weight vectors and resets the weights while preserving new, valid outliers.
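A quick way to see overfitting in practice is to compare accuracy on the training data against accuracy on held-out data; a large gap suggests the model has memorized its training set rather than generalized. The sketch below uses scikit-learn and synthetic data purely for illustration; it is not LDNN's consolidation mechanism.

```python
# Minimal sketch of spotting overfitting via the train/validation accuracy gap.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for a real dataset.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.3,
                                                  random_state=0)

# An unconstrained tree tends to memorize its training data.
model = DecisionTreeClassifier(max_depth=None).fit(X_train, y_train)
gap = model.score(X_train, y_train) - model.score(X_val, y_val)
if gap > 0.1:
    print(f"Likely overfitting: train/validation gap is {gap:.2f}")
```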
ML models can address many tasks successfully with high accuracy. However, that doesn’t mean every task can be accommodated, regardless of whether it’s a conventional classification or segmentation problem. Sometimes changing models, or developing new ones, can help. This is where data engineers can come in handy, though they tend to be rare and expensive.
Debugging models can also be a challenge. ML model debugging is much different than debugging a conventional program. Debugging models that are working within an application is another issue. Keep in mind that models will often have an accuracy of less than 100%; therefore, applications need to be designed to handle these conditions. This is less of an issue for non-critical applications. However, apps like self-driving cars will require redundant, overlapping systems.
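One common pattern for living with sub-100% accuracy is to act only when redundant, overlapping checks agree and otherwise fall back to a safe state. The sketch below assumes two hypothetical model functions and an arbitrary threshold; it is a pattern, not any vendor's safety architecture.

```python
# Sketch of redundant, overlapping checks: act only on agreement, otherwise
# fall back to a safe behavior. Both model functions are hypothetical placeholders.
def camera_model(frame):
    return "pedestrian", 0.97     # (label, confidence)

def radar_model(scan):
    return "pedestrian", 0.88

def decide(frame, scan, threshold=0.85):
    cam_label, cam_conf = camera_model(frame)
    rad_label, rad_conf = radar_model(scan)
    if cam_label == rad_label and min(cam_conf, rad_conf) >= threshold:
        return f"act on {cam_label}"
    return "fall back to safe behavior"    # sensors disagree or confidence too low

print(decide(frame=None, scan=None))
```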
Avalanche of Advances
New systems continue to come out of academia and research facilities. For example, “Learning Sensorimotor Control with Neuromorphic Sensors: Toward Hyperdimensional Active Perception” is a paper out of the University of Maryland’s engineering department. Anton Mitrokhin, Peter Sutor Jr., Cornelia Fermüller, and Computer Science Professor Yiannis Aloimonos developed a hyperdimensional pipeline for integrating sensor data, ML analysis, and control. It uses its own hyperdimensional memory system.
ML has been progressing like no other programming technology before it. Improvements have been significant even without turning to specialized hardware. Part of this is due to improved software support and to optimizations that increase accuracy or performance while reducing hardware requirements. The challenge for developers is determining what hardware to use, what ML tools to use, and how to combine them to address their application.
It’s worth building most systems now rather than waiting for the next improvement. Some platforms will be upward-compatible; however, others may not be. Going with a hardware-accelerated solution will limit the ML models that can be supported, but the performance gains are significant, often multiple orders of magnitude.
Systems that employ ML aren’t magic and their application can use conventional design approaches. They do require new tools and debugging techniques, so incorporating ML for the first time shouldn’t be a task taken lightly. On the other hand, the payback can be significant and ML models may often provide support that’s unavailable with conventional programming techniques and frameworks.
As noted, a single ML model may not be what’s needed for a particular application. Combining models, filters, and other modules requires an understanding of each, so don’t assume it will simply be a matter of choosing an ML model and doing a limited amount of training. That may be adequate in some instances, especially if the application matches an existing model, but don’t count on it until you try it out.
0 notes
Text
Hyperallergic: Reflections of a Conceptual Expatriate
(all images courtesy the artist for Hyperallergic)
1.
I don’t know maybe it’s my energy. I don’t know maybe if I had been good at math, or majored in those things which make you good at capitalism. Maybe if I had stayed bi. I don’t know — I don’t fit here. I don’t work here. There is no flow to my shine, reciprocity, and my motto is: Ubuntu, panhumanism, linked fate. Regardless, I had to create myself into a conceptual expatriate because the United States was closed to me. I kept trying to be where I thought I ought to be: art and academia. But what would actually happen? They would invite me out to twitch at me, talk down to me. Where was the kinship, mentors, open doors? Where’s invite, welcome? Where’s bonding and communion? Where was love? You expect my solidarity without loyalty? Because you’re shallow — and I look like I should be the loud bald Negro shouting about sports on ESPN? Why are you stunting me? Why must you force isolation and loneliness? What is the Americanness behind this? Is there any space for failure, vulnerability, uncontrolled feelings, and random conversations? No, we suffocate. Can we speak about the depth without becoming impassioned to the point of rancor and bitterness? No, it’s like the foreign artist I met who said even political art in the United States comes from a smirk. I just wanted to make experimental, abstract, esoteric, abstruse art, game-changing transcendent work of viscera, and frisson. I have masterpiece in me: can you see me?
Why am I blocked? Caveats; shadows; thresholds; vectors; speeds; careering; careering; celerity: Barriers.
I wasn’t just blocked, I was underwater. I was blocked because all I knew was what I knew, and what was told to me. It is and was deprivation you don’t even realize and then when you do all that void of terror from recognition, unfulfilled, longing that had been Eden on empty. On some level it’s puritanism. We don’t give or receive, we just take — USA. It’s a test I turned into a performance piece just for myself. The last few years I find every man. How they all come back to me: that’s the test and the art. Yes, I generalize, but systems hold up, at least from my lived experience, and one can glean the patterns.
2.
Death is a spectrum. The topology of threshold — every breakthrough another base and divide, limit and ceiling, line and maximum. Impermeable permeability. The boundary recesses like the tide but the waves remain the wall between shore and depths. And yet: Whiteness is black hole, cannibalizing, sucks up everything. Erasure, erasure, Gilliam, Whitten, Patterson, Philando, Alton,
And even then we only protest when he is a cis hetero black man — all other intersections haven’t been thought of yet because they are from the future, and we think in capitalism, basic AF.
What lies outside?
Beneath the conceptual expatriate is the need to be decolonial in action and deed. My work deals with atheism, death, competition, bonding, density, and utopia. I am here to disrupt social construct, which is often misconstrued with normativity and essentialism. I blame myself. Two liberal arts degrees and looking for a third: I studied the expatriate but never put it together for myself. That is such an American disconnect, subaltern reckoning, runaway slave. I know how centuries back Henry Tanner, the first, basically the first black American expatriate who got himself to France where it was easier and fathomable to be fully realized. I never saw the through line between him and me — but also no one told me either.
Those without this matrix don’t know how to police me, they don’t know the symbols I carry, different paradigm, different gaze – there’s loophole. Yet the shadow to that light: I am cliché. The world without has always looked to the black expatriate to explain USA, the role of Magic Negro on steroids and DMT.
3.
Yes there is nuance and no monolith, but still, I have no self. I am unselfconscious. Light, Gossamer, Eerie. I’m whatever your projection tells you. Your subjectivity is relational and interactive, codependent and inter-reliant. Your imagination > your projection! You were like, what are you?! And I was like, I am fractal, moiré, and iridescence inter/bisected by scintillating mirrors/coruscating screens, hyperdimensional, time immemorial — ∞.
I am more than social sculptor – I am social engineer. Overseer – when the canary in the mine (Cassandra / Oracle / Vanguard) stops being passive and starts being lead (beacon / Firestarter / spearhead).
I created my art practice to make myself a conceptual expatriate. Until I find a way out of USA to move overseas for good, I have used my enterprising, entrepreneurial, self-made, American can-do spirit to build a space inside of New York City that is without USA: like the United Nations, physically in New York City but technically without belonging to all nations; a small space, without the exploitation of the exotic, where a man can be a prophet in his own land. That was the goal — to build an allegorical United Nations of artists, a creative think tank, called Communitas, an art practice that creates new art from new connections with artists from around the world, across disciplines, across gazes, people from all walks of life, many professions, colors, cultures, nationalities, subjectivities, Ubuntu, linked fate. And together, we are building a global panhumanist collective influencing art, aesthetics, optimism, pacifism, iconoclasm, thought, imagination, and vanguardism.
4.
I created Communitas for myself, and for us: those who feel all of the aforementioned from deep within their interiority, the commons anew, beyond neoliberalism, beyond western matrices. What lies outside? What is our own gaze? To process the decolonial, my art and thinking navigate this idea of non- and super-human spaces. Communitas is both high philosophical art, and non-art but community think tank. We create at least two dimensions in any creative act: context and contours in process of the artist, making the new as conceptual expatriate and toward the decolonial as well as a world of the work of art, which members of an audience are invited to interpret and respond to, consequently shaping and making aspects of their own contexts and contours.
5.
My goal as a conceptual expatriate — a nonentity creating a nonspace, as if digital life were real life — was to launch a global arts movement of new ideas and vanguard works of art through a creative think tank, a quarterly downtown Manhattan salon, and an interdisciplinary arts collective. To create the next worldwide frontline of art, including from sports art to STEAM (STEM plus arts), through developing artist partnerships and relationships. Our next step, an upcoming apocalyptic sports video piece called “The Halcyon Sublime,” will esoterically and abstractly focus on the erupting shadow of the sins of colonialism and the Capitalocene. The aims of Communitas are the subtext made corporeal, question made flesh, and reconciliation of compartmentalization.
We shall see what’s next. You coming?
0 notes
Quote
Brain-inspired hyperdimensional computing (HDC) is an emerging computational paradigm that has achieved success in various domains. HDC mimics brain cognition and lever-ages hyperdimensional vectors with fully distributed holographic representation and (pseudo)randomness. Compared to the traditional machine learning methods, HDC offers several critical advantages, including smaller model size, less computation cost, and one-shot learning capability, making it a promising candidate in low-power platforms. Despite the growing popularity of HDC, the robustness of HDC models has not been systematically explored. This paper presents a study on the robustness of HDC to errors in associative memory—the key component storing the class representations in HDC. We perform extensive error injection experiments to the associative memory in a number of HDC models (and datasets), sweeping the error rates and varying HDC configurations (i.e., dimension and data width). Empirically, we observe that HDC is considerably robust to errors in the associative memory, opening up opportunities for further optimizations. Further, results show that HDC robustness varies significantly with different HDC configurations such as data width. Moreover, we explore a low-cost error masking mechanism in the associative memory to enhance its robustness.
Assessing Robustness of Hyperdimensional Computing Against Errors in Associative Memory : (Invited Paper) | IEEE Conference Publication | IEEE Xplore
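For a feel of what such an error-injection experiment looks like, here is a hedged numpy sketch (not the paper's code): flip random bits in stored binary class hypervectors at several error rates and check whether Hamming-distance lookup still recovers the right class. Dimensions, class count, and rates are arbitrary assumptions.

```python
# Sketch of bit-flip error injection into an HDC associative memory.
import numpy as np

rng = np.random.default_rng(0)
D, classes = 10000, 10
memory = rng.integers(0, 2, size=(classes, D), dtype=np.uint8)  # class hypervectors

def inject_errors(vectors, error_rate):
    flips = rng.random(vectors.shape) < error_rate
    return vectors ^ flips.astype(np.uint8)        # flip bits at the given rate

def classify(query, assoc_memory):
    distances = (assoc_memory ^ query).sum(axis=1) # Hamming distance per class
    return int(distances.argmin())

for rate in (0.01, 0.1, 0.3):
    noisy = inject_errors(memory, rate)
    correct = sum(classify(memory[c], noisy) == c for c in range(classes))
    print(f"error rate {rate:.2f}: {correct}/{classes} classes still matched")
```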
0 notes
Quote
The 1990s saw the emergence of cognitive models that depend on very high dimensionality and randomness. They include Holographic Reduced Representations, Spatter Code, Semantic Vectors, Latent Semantic Analysis, Context-Dependent Thinning, and Vector-Symbolic Architecture. They represent things in high-dimensional vectors that are manipulated by operations that produce new high-dimensional vectors in the style of traditional computing, in what is called here hyperdimensional computing on account of the very high dimensionality. The paper presents the main ideas behind these models, written as a tutorial essay in hopes of making the ideas accessible and even provocative. A sketch of how we have arrived at these models, with references and pointers to further reading, is given at the end. The thesis of the paper is that hyperdimensional representation has much to offer to students of cognitive science, theoretical neuroscience, computer science and engineering, and mathematics.
Hyperdimensional Computing: An Introduction to Computing in Distributed Representation with High-Dimensional Random Vectors | SpringerLink
0 notes