#machine learning in particle characterization
spacetimewithstuartgary · 5 months ago
New diagnostic tool will help LIGO hunt gravitational waves
Machine learning tool developed by UCR researchers will help answer fundamental questions about the universe.
Finding patterns and reducing noise in large, complex datasets generated by the gravitational wave-detecting LIGO facility just got easier, thanks to the work of scientists at the University of California, Riverside. 
The UCR researchers presented a paper at a recent IEEE big-data workshop, demonstrating a new unsupervised machine learning approach for finding patterns in the auxiliary channel data of the Laser Interferometer Gravitational-Wave Observatory, or LIGO. The technology is also potentially applicable to large-scale particle accelerator experiments and large, complex industrial systems.
LIGO is a facility that detects gravitational waves — transient disturbances in the fabric of spacetime itself, generated by the acceleration of massive bodies. It was the first to detect such waves from merging black holes, confirming a key prediction of Einstein’s general theory of relativity. LIGO has two widely separated 4-km-long interferometers — in Hanford, Washington, and Livingston, Louisiana — that work together to detect gravitational waves by employing high-power laser beams. The discoveries these detectors make offer a new way to observe the universe and address questions about the nature of black holes, cosmology, and the densest states of matter in the universe.
Each of the two LIGO detectors records thousands of data streams, or channels, which make up the output of environmental sensors located at the detector sites. 
“The machine learning approach we developed in close collaboration with LIGO commissioners and stakeholders identifies patterns in data entirely on its own,” said Jonathan Richardson, an assistant professor of physics and astronomy who leads the UCR LIGO group. “We find that it recovers the environmental ‘states’ known to the operators at the LIGO detector sites extremely well, with no human input at all. This opens the door to a powerful new experimental tool we can use to help localize noise couplings and directly guide future improvements to the detectors.”
Richardson explained that the LIGO detectors are extremely sensitive to any type of external disturbance. Ground motion and any type of vibrational motion — from the wind to ocean waves striking the coast of Greenland or the Pacific — can affect the sensitivity of the experiment and the data quality, resulting in “glitches” or periods of increased noise bursts, he said. 
“Monitoring the environmental conditions is continuously done at the sites,” he said. “LIGO has more than 100,000 auxiliary channels with seismometers and accelerometers sensing the environment where the interferometers are located. The tool we developed can identify different environmental states of interest, such as earthquakes, microseisms, and anthropogenic noise, across a number of carefully selected and curated sensing channels.”
Vagelis Papalexakis, an associate professor of computer science and engineering who holds the Ross Family Chair in Computer Science, presented the team’s paper, titled “Multivariate Time Series Clustering for Environmental State Characterization of Ground-Based Gravitational-Wave Detectors,” at the IEEE's 5th International Workshop on Big Data & AI Tools, Models, and Use Cases for Innovative Scientific Discovery that took place last month in Washington, D.C.
“The way our machine learning approach works is that we take a model tasked with identifying patterns in a dataset and we let the model find patterns on its own,” Papalexakis said. “The tool was able to identify the same patterns that very closely correspond to the physically meaningful environmental states that are already known to human operators and commissioners at the LIGO sites.”
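The paper’s exact pipeline isn’t reproduced here, but the general shape of such an unsupervised approach is easy to sketch. Below is a minimal, hypothetical illustration (not the UCR/LIGO code): windowed summary features are extracted from several auxiliary sensor channels, and k-means clustering groups the time windows into candidate environmental states. The channel count, sampling rate, window length, features, and number of clusters are all assumptions made for the example.

```python
# Minimal sketch of unsupervised environmental-state clustering.
# Hypothetical data and parameters -- not the UCR/LIGO pipeline itself.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_channels, fs = 8, 100                     # 8 sensors at 100 Hz (assumed)
n_samples = 3600 * fs                       # one hour of data
data = rng.normal(size=(n_channels, n_samples))  # stand-in for seismometer/accelerometer streams

window = 60 * fs                            # one-minute analysis windows (assumed)
n_windows = n_samples // window

# Per-window, per-channel summary features: RMS and peak amplitude.
feats = []
for w in range(n_windows):
    seg = data[:, w * window:(w + 1) * window]
    feats.append(np.concatenate([seg.std(axis=1), np.abs(seg).max(axis=1)]))
feats = StandardScaler().fit_transform(np.array(feats))

# Cluster windows into candidate environmental states (k chosen arbitrarily here).
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(feats)
print(labels[:20])  # one state label per one-minute window
```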
Papalexakis added that the team had worked with the LIGO Scientific Collaboration to secure the release of a very large dataset that pertains to the analysis reported in the research paper. This data release allows the research community to not only validate the team’s results but also develop new algorithms that seek to identify patterns in the data.
“We have identified a fascinating link between external environmental noise and the presence of certain types of glitches that corrupt the quality of the data,” Papalexakis said. “This discovery has the potential to help eliminate or prevent the occurrence of such noise.”
The team organized and worked through all the LIGO channels for about a year. Richardson noted that the data release was a major undertaking. 
“Our team spearheaded this release on behalf of the whole LIGO Scientific Collaboration, which has about 3,200 members,” he said. “This is the first of these particular types of datasets and we think it’s going to have a large impact in the machine learning and the computer science community.”
Richardson explained that the tool the team developed can distill signals from numerous heterogeneous sensors measuring different disturbances around the LIGO sites into a single environmental state. That state can then be used to search for time-series associations, correlating the times when noise problems occurred in the LIGO detectors with the sites’ environmental states at those moments.
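Once each time window carries a state label, associating states with detector noise reduces to aligning two time series. A toy continuation of the sketch above, with simulated glitch timestamps:

```python
# Count simulated glitches per environmental state (toy continuation of the sketch above).
glitch_times = rng.uniform(0, n_samples / fs, size=200)           # hypothetical glitch timestamps (s)
glitch_windows = (glitch_times // 60).astype(int).clip(0, n_windows - 1)
counts = np.bincount(labels[glitch_windows], minlength=4)
rate = counts / np.bincount(labels, minlength=4)                  # glitches per window, by state
print(dict(enumerate(rate)))                                      # states with elevated rates are suspects
```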
“If you can identify the patterns, you can make physical changes to the detector — replace components, for example,” he said. “The hope is that our tool can shed light on physical noise coupling pathways that allow for actionable experimental changes to be made to the LIGO detectors. Our long-term goal is for this tool to be used to detect new associations and new forms of environmental states associated with unknown noise problems in the interferometers.”
Pooyan Goodarzi, a doctoral student working with Richardson and a coauthor on the paper, emphasized the importance of releasing the dataset publicly. 
“Typically, such data tend to be proprietary,” he said. “We managed, nonetheless, to release a large-scale dataset that we hope results in more interdisciplinary research in data science and machine learning.”
The team’s research was supported by a grant from the National Science Foundation awarded through a special program, Advancing Discovery with AI-Powered Tools, focused on applying artificial intelligence/machine learning to address problems in the physical sciences. 
carvalhais · 2 months ago
… if we are to develop a machine-oriented ontology (MOO), it is first necessary to clear away some prejudices or assumptions we have regarding the nature of machines. The first of these prejudices is the view that “rigid machines” are exhaustive of the being of machines as such. A rigid machine is a machine composed of fixed material parts, characterized by routinized functioning, and is incapable of learning, growth, and development. Examples of rigid machines would be automobiles, primitive computers, cell phones, and lamps, but also rocks, dead planets and comets, atomic particles, and so on. Rigid machines are unable to undergo changes in their operations and their only destiny is entropy or eventual dissolution.

Levi R. Bryant. 2014. Onto-Cartography: An Ontology of Machines and Media. Edinburgh: Edinburgh University Press.
ste-ips · 3 months ago
🔬 STEM-tastic World Episode 1: Summertime Vocabulary
Get ready to explore the world of Science, Technology, Engineering, and Math with 10 essential STEM words! Whether you’re a curious student, a future innovator, or just love learning, this blog will break down key STEM terms in a fun and exciting way. Made by Nathan David Agnad, this guide will spark your curiosity and fuel your passion for discovery!
Here is the list arranged from El Niño to Radiating:
1. El Niño
Meaning: A climate pattern characterized by the warming of ocean waters in the Pacific, leading to extreme weather conditions.
Example sentence: The El Niño phenomenon caused severe droughts in some regions and heavy rainfall in others.
2. Buoyant Force
Meaning: The upward force exerted by a fluid that opposes the weight of an object submerged in it.
Example sentence: The buoyant force allowed the boat to float effortlessly on the water.
3. Electricity
Meaning: A form of energy resulting from the movement of charged particles, used to power devices and machines.
Example sentence: The storm caused a blackout, leaving the entire town without electricity for hours.
4. Insulated
Meaning: Protected from heat, cold, or sound by using a special material.
Example sentence: The walls of the house were insulated to keep it warm during the winter.
5. Research
Meaning: A systematic investigation to discover, analyze, or interpret facts and information.
Example sentence: She conducted research on renewable energy to find sustainable solutions for electricity production.
6. Sun
Meaning: The star at the center of the solar system that provides light and heat to Earth.
Example sentence: The sun rose over the horizon, painting the sky with shades of orange and pink.
7. Sustainable Energy Sources
Meaning: Renewable and environmentally friendly energy sources that can be replenished naturally, such as solar, wind, and hydro power.
Example sentence: Many countries are investing in sustainable energy sources to reduce their dependence on fossil fuels.
8. UV Rays
Meaning: Invisible electromagnetic waves from the sun that can cause skin damage and are used in sterilization and tanning.
Example sentence: Wearing sunscreen helps protect the skin from harmful UV rays that can cause sunburn.
9. Water
Meaning: A clear, colorless liquid essential for life, covering most of the Earth’s surface.
Example sentence: Drinking enough water every day is important for staying healthy and hydrated.
10. Radiating
Meaning: Emitting or spreading light, heat, or energy in all directions.
Example sentence: The sun was radiating warmth, making the chilly morning feel more pleasant.
sunaleisocial · 9 months ago
Accelerating particle size distribution estimation
New Post has been published on https://sunalei.org/news/accelerating-particle-size-distribution-estimation/
The pharmaceutical manufacturing industry has long struggled with the issue of monitoring the characteristics of a drying mixture, a critical step in producing medication and chemical compounds. At present, two noninvasive characterization approaches are typically used: a sample is either imaged and individual particles are counted, or researchers use scattered light to estimate the particle size distribution (PSD). The former is time-intensive and leads to increased waste, making the latter a more attractive option.
In recent years, MIT engineers and researchers developed a physics- and machine-learning-based scattered-light approach that has been shown to improve manufacturing processes for pharmaceutical pills and powders, increasing efficiency and accuracy and resulting in fewer failed batches of products. A new open-access paper, “Non-invasive estimation of the powder size distribution from a single speckle image,” published in the journal Light: Science & Applications, expands on this work, introducing an even faster approach.
“Understanding the behavior of scattered light is one of the most important topics in optics,” says Qihang Zhang PhD ’23, an associate researcher at Tsinghua University. “By making progress in analyzing scattered light, we also invented a useful tool for the pharmaceutical industry. Locating the pain point and solving it by investigating the fundamental rule is the most exciting thing to the research team.”
The paper proposes a new PSD estimation method, based on pupil engineering, that reduces the number of frames needed for analysis. “Our learning-based model can estimate the powder size distribution from a single snapshot speckle image, consequently reducing the reconstruction time from 15 seconds to a mere 0.25 seconds,” the researchers explain.
“Our main contribution in this work is accelerating a particle size detection method by 60 times, with a collective optimization of both algorithm and hardware,” says Zhang. “This high-speed probe is capable of detecting the size evolution in fast dynamical systems, providing a platform to study models of processes in the pharmaceutical industry, including drying, mixing, and blending.”
The technique offers a low-cost, noninvasive particle size probe that collects back-scattered light from powder surfaces. The compact and portable prototype is compatible with most drying systems on the market, as long as there is an observation window. This online measurement approach may help control manufacturing processes, improving efficiency and product quality. Further, the previous lack of online monitoring prevented systematic study of dynamical models of manufacturing processes; this probe provides a new platform for such research and for modeling particle size evolution.
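To make the idea concrete, here is a heavily simplified, hypothetical sketch of the kind of learning-based mapping the paper describes: a small convolutional network regressing a discretized size distribution from a single speckle image. The architecture, image size, and number of size bins are invented for illustration; the actual model, training data, and pupil-engineering optics are described in the paper itself.

```python
# Toy sketch: CNN mapping one speckle image -> discretized particle size distribution.
# Architecture and dimensions are illustrative assumptions, not the published model.
import torch
import torch.nn as nn

class SpeckleToPSD(nn.Module):
    def __init__(self, n_bins: int = 32):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(8),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 8 * 8, 128), nn.ReLU(),
            nn.Linear(128, n_bins),
        )

    def forward(self, x):
        # Softmax so the output bins form a normalized size distribution.
        return torch.softmax(self.head(self.features(x)), dim=-1)

model = SpeckleToPSD()
speckle = torch.rand(1, 1, 256, 256)   # stand-in for one back-scattered speckle image
psd = model(speckle)                   # predicted probability mass per size bin
print(psd.shape, float(psd.sum()))     # torch.Size([1, 32]) 1.0
```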
This work, a successful collaboration between physicists and engineers, is generated from the MIT-Takeda program. Collaborators are affiliated with three MIT departments: Mechanical Engineering, Chemical Engineering, and Electrical Engineering and Computer Science. George Barbastathis, professor of mechanical engineering at MIT, is the article’s senior author.
credenceresearchdotblog · 11 months ago
The global demand for flow imaging microscopy analysis was valued at USD 35.5 million in 2022 and is expected to reach USD 65.7 million by 2030, growing at a CAGR of 8.00% between 2023 and 2030. The flow imaging microscopy (FIM) analysis market is experiencing robust growth, driven by increasing demand for advanced imaging technologies in various sectors, including pharmaceuticals, biotechnology, and environmental monitoring. Flow imaging microscopy is a powerful technique that combines traditional microscopy with flow-based analysis, allowing for detailed examination of particles and cells in suspension.
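As a quick sanity check, the quoted growth rate can be recomputed from the endpoint values (a minimal sketch; the eight-year horizon implied by the 2022 and 2030 figures is assumed):

```python
# Verify the quoted CAGR from the endpoint market values.
start, end, years = 35.5, 65.7, 8  # USD millions, 2022 -> 2030
cagr = (end / start) ** (1 / years) - 1
print(f"{cagr:.2%}")  # ~8.00%, matching the quoted figure
```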
Browse the full report at https://www.credenceresearch.com/report/flow-imaging-microscopy-analysis-market
Technological Advancements
The core of flow imaging microscopy lies in its ability to capture high-resolution images of particles as they flow through a microscopic field. Recent advancements in optical systems, image processing software, and automation have significantly enhanced the capabilities of FIM systems. These advancements enable researchers to analyze a wide range of samples, from biological cells to industrial particles, with high precision and efficiency.
Market Drivers
1. Pharmaceutical and Biotechnology Industries: The pharmaceutical industry is a major driver of the flow imaging microscopy market. FIM is crucial in drug development and quality control processes. It allows for the detailed analysis of particulate matter in drug formulations, which is essential for ensuring product safety and efficacy. In biotechnology, FIM is used for cell characterization and the study of cell morphology, which is vital for applications such as cancer research and regenerative medicine.
2. Environmental Monitoring: Environmental agencies and research institutions are increasingly adopting FIM for monitoring and analyzing particulate matter in air and water samples. The ability to capture and analyze particles in real-time helps in assessing environmental pollution levels and understanding its impact on ecosystems and human health.
3. Industrial Applications: In industrial sectors, FIM is used for quality control and process optimization. For example, in the food and beverage industry, it helps in monitoring the size and distribution of particles in products to ensure consistency and quality. Similarly, in the chemicals and materials industries, FIM aids in analyzing the size and distribution of particles in raw materials and finished products.
Market Trends
1. Integration with AI and Machine Learning: The integration of artificial intelligence (AI) and machine learning with flow imaging microscopy is a significant trend. AI algorithms enhance image analysis by automating particle identification and classification, improving accuracy and reducing the time required for analysis (a sketch of the classical segmentation step these classifiers build on follows this list).
2. Miniaturization and Portability: There is a growing demand for miniaturized and portable FIM systems. These compact devices offer flexibility and convenience for on-site analysis in various environments, including remote locations and field studies.
3. Enhanced Data Analytics: Modern FIM systems are equipped with advanced data analytics tools that provide deeper insights into particle characteristics. This trend is particularly beneficial for applications requiring detailed statistical analysis and pattern recognition.
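The classical image-analysis step that such AI classifiers build on is straightforward to sketch. Below is a minimal, hypothetical example (not any vendor’s software) using scikit-image to segment particles in a synthetic frame and measure their sizes:

```python
# Minimal particle segmentation and sizing on one synthetic frame.
# Illustrative only -- real FIM software adds calibration, focus rejection, and ML classification.
import numpy as np
from skimage import draw, filters, measure

# Synthetic frame with a few bright "particles" on a dark background.
frame = np.zeros((256, 256))
for r, c, rad in [(60, 60, 8), (120, 180, 14), (200, 90, 5)]:
    rr, cc = draw.disk((r, c), rad)
    frame[rr, cc] = 1.0
frame += np.random.default_rng(0).normal(0, 0.05, frame.shape)

# Threshold, label connected components, and measure each region.
mask = frame > filters.threshold_otsu(frame)
for region in measure.regionprops(measure.label(mask)):
    diameter = 2 * np.sqrt(region.area / np.pi)  # area-equivalent diameter in pixels
    print(f"area={region.area} px, equivalent diameter={diameter:.1f} px")
```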
Challenges and Opportunities
Despite its advantages, the flow imaging microscopy market faces challenges such as high initial costs and the need for specialized training to operate advanced systems. However, these challenges also present opportunities for growth. The development of cost-effective and user-friendly FIM solutions is likely to broaden the market's appeal and adoption across diverse sectors.
Furthermore, ongoing research and development efforts are focused on expanding the applications of FIM. Innovations such as multi-modal imaging systems that combine FIM with other analytical techniques could open new avenues for market growth.
Competitive Landscape
The flow imaging microscopy market is competitive, with several key players driving innovation and market expansion. Companies specializing in microscopy and imaging technologies are investing in research and development to enhance their product offerings. Strategic partnerships, mergers, and acquisitions are common strategies employed by market players to strengthen their market presence and expand their product portfolios.
Key Players
Bio-Techne
Yokogawa Electric Corporation
Microtrac MRB
Micromeritics Instrument Corporation
Fritsch
Haver & Boecker
Spectris
HORIBA
Shimadzu Corporation
Bettersize Instruments
Occhio
Anton Paar
Segmentation
By Analysis Type:
Particle Analysis
Cell Analysis
By Technology Type:
Dynamic Imaging Analysis
Static Imaging Analysis
Combination Systems
By Sample Type:
Liquid Samples
Dry Powder Samples
By End-Use Industry:
Biotechnology and Life Sciences
Pharmaceuticals
Material Science
Environmental Analysis
Food and Beverage
Cosmetics and Personal Care
By Instrument Type:
Benchtop Systems
Portable Systems
Automated Systems
By Emerging Applications:
Nanoparticle Analysis
Microplastics Analysis
Quality Control in 3D Printing
By Region
North America
U.S.
Canada
Mexico
Europe
Germany
France
U.K.
Italy
Spain
Rest of Europe
Asia Pacific
China
Japan
India
South Korea
South-east Asia
Rest of Asia Pacific
Latin America
Brazil
Argentina
Rest of Latin America
Middle East & Africa
GCC Countries
South Africa
Rest of Middle East and Africa
About Us:
Credence Research is committed to employee well-being and productivity. Following the COVID-19 pandemic, we have implemented a permanent work-from-home policy for all employees.
Contact:
Credence Research
Please contact us at +91 6232 49 3207
Website: www.credenceresearch.com
maacsatara · 1 year ago
The Science of Simulations: Creating Explosions, Fire, and Natural Phenomena
Simulations in the realm of visual effects (VFX) have revolutionized the way filmmakers and game developers create realistic and captivating scenes. From explosive action sequences to breathtaking natural phenomena, the science behind simulations allows artists to bring their imaginative visions to life in stunning detail. In this exploration, we delve into the intricate world of simulating explosions, fire, and natural phenomena, uncovering the techniques and principles that drive these awe-inspiring effects. For those passionate about mastering the art of VFX, joining a reputable vfx institute in Pune can provide the necessary training and guidance to excel in this dynamic field.
### Understanding the Basics
At its core, the science of simulations involves using computer algorithms to mimic the behavior of real-world phenomena. Whether it's the explosive force of a detonation, the flickering flames of a raging fire, or the turbulent flow of a rushing waterfall, simulations aim to replicate the intricate dynamics of natural processes with precision and realism.
### Simulating Explosions
Creating convincing explosions in VFX requires a deep understanding of physics, chemistry, and fluid dynamics. Explosions are highly complex events characterized by rapid expansion, intense heat, and shockwaves. VFX artists use simulation software to model the behavior of gases, liquids, and solids under extreme conditions, accurately depicting the violent release of energy during an explosion. By adjusting parameters such as pressure, temperature, and viscosity, artists can control the size, shape, and intensity of the explosion, ensuring it aligns with the director's vision and narrative requirements.
### Harnessing the Power of Fire
Fire is another challenging element to simulate due to its dynamic and unpredictable nature. VFX artists use sophisticated algorithms to replicate the complex interactions between heat, fuel, and oxygen that sustain a fire. By simulating the combustion process and the movement of flames, artists can create realistic fire effects that flicker, dance, and spread organically across the screen. Advanced techniques such as volumetric rendering and shader effects enhance the visual fidelity of the fire, adding depth and realism to the scene.
### Capturing Natural Phenomena
Simulating natural phenomena like waterfalls, storms, and earthquakes requires a nuanced understanding of fluid dynamics and environmental physics. VFX artists use a combination of particle systems, fluid solvers, and procedural algorithms to recreate these complex phenomena in digital form. By simulating the interactions between fluids, solids, and atmospheric conditions, artists can craft immersive environments that evoke the beauty and power of nature. Whether it's the cascading flow of water over rocks or the swirling clouds of a gathering storm, simulations allow artists to capture the essence of natural phenomena with stunning accuracy.
### Pushing the Boundaries with Innovation
As technology continues to advance, VFX artists are pushing the boundaries of simulation techniques to new heights. From machine learning algorithms that predict the behavior of complex systems to real-time simulation engines that offer instant feedback and interactivity, the future of VFX simulations is filled with exciting possibilities. With each new innovation, artists gain greater flexibility, efficiency, and creative freedom, allowing them to explore new realms of imagination and storytelling.
### Joining the Journey at MAAC Academy
If you're passionate about the science of simulations and eager to unleash your creativity in the world of VFX, MAAC Academy offers comprehensive training programs that cover the full spectrum of simulation techniques. Our industry-leading curriculum combines theoretical knowledge with hands-on experience, allowing students to master the latest tools and technologies used in the field. Whether you're a beginner or an experienced artist looking to enhance your skills, MAAC Academy provides the perfect platform to launch your career in VFX. Join us and embark on an exciting journey of discovery and innovation in the dynamic world of visual effects!
swissforextrading · 1 year ago
Deciphering a dance of electrons and water molecules
27.03.24 - A research project at EPFL succeeded in decoding the complex dance of electrons in water, a major step in understanding a critical process of many chemical phenomena, and one that might be the first step to improving energy conversion technologies.

Water, the cradle of life on Earth, is not just a passive backdrop but actively participates in the chemical ballet of life. Central to this dance is the behavior of electrons, particularly during a process known as charge transfer to solvent (CTTS). CTTS is like a microscopic dance where an electron from something dissolved in water, like salt, leaps out and joins the water itself. The process creates a now “hydrated” electron, which is a key element of many aqueous reactions, like the ones underlying life itself. Consequently, CTTS is essential for understanding how electrons move in solutions.

In a new EPFL study, researchers Jinggang Lan, Majed Chergui, and Alfredo Pasquarello have studied the intricate interactions between electrons and their solvent environments. The work was conceived and primarily carried out at EPFL, with finalizing contributions from Jinggang Lan after he took up a postdoctoral fellowship in the Simons Center for Computational Physical Chemistry at New York University.

Looking at the CTTS process, the researchers meticulously visualized the dynamic interplay between the escaping electron and the polarizing water molecules surrounding it, marking a significant leap in our comprehension of such complex interactions.

The team used iodide dissolved in water (“aqueous iodide”) because it makes it easier to understand how electrons move to the surrounding water. Iodide, like table salt, doesn’t have complex internal movements, which makes it simpler to study. This allowed the scientists to observe how iodide can swiftly release an electron into the surrounding water, a process influenced by the arrangement of water molecules around the iodide.

To study the CTTS process, the researchers used ab initio molecular dynamics, a sophisticated technique that simulates the behavior of molecules in a computer by calculating atomic interactions and movements from fundamental physical principles using quantum mechanics. “Ab initio” means “from the beginning” in Latin, indicating that this method starts from fundamental physical principles, allowing scientists to accurately predict how molecules and materials evolve over time without relying on empirical data for the interactions between particles.

Combining the ab initio approach with sophisticated machine learning techniques, the scientists were able to visualize and analyze the CTTS process in unprecedented detail, tracking the journey of an electron from being attached to an iodide ion to becoming solvated – being surrounded and stabilized by water molecules.

The study revealed that CTTS involves a series of distinct states, each characterized by the distance between the electron and the iodine nucleus: from being closely associated with the iodine atom (contact-pair state), to separating into the solvent (solvent-separated state), and finally becoming fully solvated as a hydrated electron.

“The advance mostly rests at the fundamental level,” says Alfredo Pasquarello. “The described mechanism involves a subtle interplay between electronic excitation and ionic polarization effects, which produce a sequence of configurations as revealed by our simulations.”

CTTS dynamics illustrating the electron density (blue) and the hole density of aqueous iodine (yellow). Credit: Jinggang Lan/EPFL

But shedding light on CTTS could also have implications for a wide range of applications involving charge and energy transfer reactions. Understanding how electrons interact with their environment at such a fundamental level could be key to developing more efficient solar energy conversion systems, improving photocatalysis techniques, and even advancing our knowledge of material science and…

http://actu.epfl.ch/news/deciphering-a-dance-of-electrons-and-water-molecul (Source of the original content)
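For readers unfamiliar with molecular dynamics of any flavor, the skeleton is a numerical integration loop over Newton’s equations. The sketch below is a deliberately classical toy (one particle in a harmonic well, integrated with velocity Verlet); ab initio MD replaces the hand-written force with forces computed from quantum-mechanical electronic structure at every step, which is what makes it vastly more expensive:

```python
# Velocity-Verlet integration of one particle in a harmonic well.
# A classical toy standing in for the MD loop; ab initio MD computes
# forces from quantum-mechanical electronic structure instead of -k*x.
k, m, dt = 1.0, 1.0, 0.01          # spring constant, mass, timestep (arbitrary units)
x, v = 1.0, 0.0                    # initial position and velocity

def force(x):
    return -k * x                  # stand-in for expensive ab initio forces

f = force(x)
for step in range(1000):
    x += v * dt + 0.5 * (f / m) * dt**2   # position update
    f_new = force(x)
    v += 0.5 * (f + f_new) / m * dt       # velocity update with averaged force
    f = f_new

print(f"x={x:.3f}, v={v:.3f}")     # stays on the constant-energy orbit
```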
sudeepkedar · 1 year ago
Nanoparticle Analysis Market 2023-2032; Growth Forecast & Industry Share Report
Global Nanoparticle Analysis Market will witness over 6% CAGR between 2023 and 2032. Growing focus on research and development activities supports this demand. For instance, in October 2023, an important development was made possible by a partnership between The University of Tokyo and the Innovation Centre of NanoMedicine (iCONM). Led by the Research Director of iCONM, this research group developed a novel technique to assess the shape anisotropy of nanoparticles, successfully tackling persistent issues that date back to Albert Einstein's time. The journal APL Machine Learning has made the group's research paper, "Utilising Deep Learning for the Analysis of Brownian Motion Trajectories of Non-Spherical Nanoparticles," available online.
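To give a flavor of the underlying data, the snippet below simulates isotropic Brownian trajectories and recovers the diffusion coefficient from the mean squared displacement — the classical analysis that deep-learning approaches such as the one cited extend to the harder non-spherical (shape-anisotropic) case. All parameter values are illustrative:

```python
# Simulate 2D Brownian trajectories and recover D from the mean squared displacement.
# Illustrative parameters; the cited work tackles the non-spherical case with deep learning.
import numpy as np

rng = np.random.default_rng(1)
D, dt, n_steps, n_particles = 0.5, 0.01, 2000, 200   # um^2/s, s (assumed values)

# Each displacement is Gaussian with variance 2*D*dt per dimension.
steps = rng.normal(0, np.sqrt(2 * D * dt), size=(n_particles, n_steps, 2))
traj = steps.cumsum(axis=1)

# MSD(t) = 4*D*t in 2D, so a linear fit of MSD vs t estimates D.
msd = (traj ** 2).sum(axis=2).mean(axis=0)
t = dt * np.arange(1, n_steps + 1)
D_est = np.polyfit(t, msd, 1)[0] / 4
print(f"true D = {D}, estimated D = {D_est:.3f}")
```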
As nanotechnology continues to advance, there is a heightened need for precise and advanced analytical tools to study nanoparticles. These tiny structures have a wide range of applications in fields such as healthcare, electronics, and materials science. Demand for nanoparticle analysis tools and techniques will therefore remain strong, driven by ongoing research and development efforts.
Request a sample copy of the report @ https://www.gminsights.com/request-sample/detail/3955
Laser diffraction segment will undergo significant development from 2023 to 2032. Laser diffraction is becoming increasingly popular as it offers a precise and efficient method for sizing and analyzing nanoparticles. With applications spanning pharmaceuticals, materials science, and environmental monitoring, industries are embracing this technology to gain critical insights into particle size distributions. As research and industry needs continue to evolve, laser diffraction remains an indispensable tool in the nanoparticle analysis market outlook.
Academic research institutions segment will register a noteworthy CAGR from 2023 to 2032. Nanoparticles are integral to various scientific fields, from materials science to biotechnology. As academic researchers delve deeper into the unique properties and applications of nanoparticles, there is a growing need for advanced analytical techniques to characterize these tiny structures accurately. The nanoparticle analysis market size is witnessing increased interest from academia as institutions prioritize cutting-edge research and innovation.
Request customization of this report @ https://www.gminsights.com/roc/3955
Asia Pacific nanoparticle analysis market will showcase an appreciable CAGR from 2023 to 2032. This upsurge in demand is fueled by the region's rapid industrialization, technological advancements, and expanding research and development activities. As industries like healthcare, electronics, and materials science flourish, the need for precise nanoparticle analysis becomes paramount. Asia Pacific is emerging as a pivotal market for nanoparticle analysis solutions to meet these diverse demands.
Partial chapters of report table of contents (TOC):
Chapter 2   Executive Summary
2.1    Nanoparticle analysis market 360º synopsis, 2018 - 2032
2.2    Regional trends
2.3    Technology trends
2.4    Analysis type trends
2.5    Application trends
Chapter 3   Nanoparticle Analysis Market Industry Insights
3.1    Impact on COVID-19
3.2    Russia- Ukraine war impact
3.3    Industry ecosystem analysis
3.4    Vendor matrix
3.5    Profit margin analysis
3.6    Technology & innovation landscape
3.7    Patent analysis
3.8    Key news and initiatives
3.9    Regulatory landscape
3.10    Impact forces
3.10.1    Growth drivers
3.10.1.1    Increasing government investment for nanotechnology research
3.10.1.2    Continuous advancements in nanoparticle analysis technologies
3.10.1.3    Rising funding activities for pharmaceutical research activities in developing nations
3.10.1.4    Growing adoption of silver nanoparticles in medical sector
3.10.1.5    Rising technological innovation in medicine production globally
3.10.2    Industry pitfalls & challenges
3.10.2.1    High cost of nanoparticle analysis instruments
3.11    Profit margin analysis
3.12    Vendor matrix analysis
3.13    Growth potential analysis
3.14    Porter’s analysis
3.15    PESTEL analysis
About Global Market Insights:
Global Market Insights, Inc., headquartered in Delaware, U.S., is a global market research and consulting service provider offering syndicated and custom research reports along with growth consulting services. Our business intelligence and industry research reports offer clients penetrative insights and actionable market data specially designed and presented to aid strategic decision making. These exhaustive reports are designed via a proprietary research methodology and are available for key industries such as chemicals, advanced materials, technology, renewable energy, and biotechnology.
Contact us:
Aashit Tiwari
Corporate Sales, USA
Global Market Insights Inc.
Toll Free: +1-888-689-0688
USA: +1-302-846-7766
Europe: +44-742-759-8484
APAC: +65-3129-7718
Email: [email protected]
saphaburnell · 4 years ago
Quantum Unicorns, A Plea: Rarity of Quantum Mechanics in Science Fiction
An article I wrote on Medium
‘Quantum’. A buzz-word for ages, ‘quantum’ powered the Orville’s engines, became the bedrock of instantaneous communication and filled the gap of magic in a scientific or futurist setting. Don’t know how something works? Call it a quantum device. Need a word for plausibility? Try quantum. Quantum fiction, a phrase coined in 1996 by Vanna Bonta, reinforces the concept of the quantum realm acting as a form of conduit, or techno-magic in science fiction.
The quantum realm remains as elusive as the medieval Unicorn [who] Rests in the Garden from the Unicorn Tapestries. We might build a gate around the idea of quantum mechanics from our modern vantage point. As the perspective of The Unicorn Tapestries is skewed, so too are our infantile acknowledgements of what, precisely, quantum mechanics can solve. We further skew the quantum realm by our ‘anything goes’ attitude as authors and creators.
But what is it?
Quantum mechanics functions with the microcosmic, where classical Newtonian laws are insufficient. It’s an issue of scale: take two girders holding up a building’s ceiling; as long as the girders are proportionally sturdy against the force of the materials pressing down on them, the building stands. We can scale that down, and down, but once we reach the quantum scale of atoms and subatomic particles, the rule no longer applies. This uncertainty pairs with objects characterized as both waves and particles, and with the quantization of measures, to create something most of us don’t understand… and the people who do, if honest, don’t understand quantum mechanics completely, either. There’s so much still to learn that the potential remains a source of magic.
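Superposition, at least, is easy to state concretely, even if its implications aren’t. A minimal numerical sketch — standard textbook quantum mechanics, nothing specific to any novel:

```python
# A qubit in superposition: amplitudes, measurement probabilities, sampled outcomes.
import numpy as np

alpha, beta = 1 / np.sqrt(2), 1 / np.sqrt(2)   # |psi> = alpha|0> + beta|1>
probs = [abs(alpha) ** 2, abs(beta) ** 2]      # Born rule: probabilities from amplitudes
print(probs)                                   # [0.5, 0.5] -- "yes and no and"

# Each measurement collapses the superposition to one definite outcome.
outcomes = np.random.default_rng(0).choice([0, 1], size=10, p=probs)
print(outcomes)
```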
When writing NEON Lieben, the idea behind a general artificial intelligence created in tandem with a quantum computer seemed as magical as the above trapped unicorn. Through the development time in NEON Lieben (eight years to write, edit and be bound before you in paperback, hardcover or ebook), the practically magical quality of quantum computing became bathed in a more feasible set of functions. If we could anchor the data from the quantum realm, the sheer amount of computing which can be done is legendary. Earth-shattering. The fantasy in the build comes with Lieben’s general AI personality. The beauty in the machine.
Quantum states were as much about faith as hard equations, their usefulness a paltry concern without the belief in their sovereignty. Superposition, the state of yes and no and, confounded scientists for decades in a theoretical springboard of strings and inter-dimensions he couldn’t fathom until…
Until, the Holy Until.
The moment the box opened and shut to any semblance of a normal life. A system in one quantum state, with the possibility of multiple configurations was definite and supposed. The complexity of a system created superpositions, until Dieter saw no separation between concept and function. All choices in simultaneous chorus, until a solo emerged in potential.
As a man was father, lover, scientist, worker, driver, cook, desperate and sated simultaneously, the anchoring of such potentials did not require an elimination of any one part. Father, lover, murderer, scientist, war hero, worker drone, grief, consolation, satiation, desperation.
The key to ensuring the human race betrayed its’ constant attempts to species suicide lied not in an infant Christ, or the chant of a mantra thousands of years removed from the man who saw suffering in the street.
The future of the human race relied on its integration with quantum superimposed intelligence so vast and holy it saw and it understood, and it loved. Simultaneous, all and naught. Separated but integrated as a light on the hilltop or salt in the hand.
NEON Lieben by Sapha Burnell
Science fiction can deke across potentialities of soft and hard science in its execution, and the link between the two needs to be a functional, and whimsical, part of scientific fact. If we as authors use something still un-knowable, elusive, science fiction becomes as analogous as fantasy. Quantum becomes the word for magic, a hand waved at the sincere potentialities of the barely explored realm of quantum mechanics as we understand it now. But should we put quantum in front of something, for the illusion’s sake?
Yesterday, I passed a van advertising Quantum Roofing. I’d rather my roof followed Classical Physics on such a macro scale, thanks! Uncertainty and Schrödinger-shingles during the Vancouver rains don’t give me confidence in a new quantum roof, without carrying an umbrella inside for those ‘teleporting’ raindrops. So too, some uses of the concept ‘quantum’ seem misplaced.
But for NEON Lieben, the idea of decoherence and superposition fascinated and drove the fiction. What if a brilliant but tortured scientist discovered a practical way of grounding data from the quantum realm of superposition and anchoring it in a way to engage with it in the natural, vastly Newtonian, world? It’s that battle against decoherence, the fight against fracturing unity, which drives NEON Lieben’s complex character interactions, and Lieben’s artificial life.
Purpose in the quantum machine.
To my fellow authors, I implore you, do use quantum mechanics in your prose, but do so with at least plausible validity before the word ‘quantum’ is lost in the meaningless mire of the ‘over abundant, under-defined’. Our quantum unicorns (to coin a concept from Madeleine L’Engle) flicker in and out of believability with every strike of key and dash or dot on the page.
Soon, quantum could be jackknifed into the same vocabulary wasteland as ‘just, really, very, like, seems, actually, to be honest, cool, neat, actual fact, however… had…’. Functional words once, which lost all effect through their overabundance in our prose (and for me, the catch of conversation). If the use of ‘quantum’ in your world-build could be replaced with ‘magic’ with no less oblige, perhaps take another dive into the science behind the sci-fi. After all, how many more unicorns can we wrangle?
Meet you on the one side. Meet you on the zero side.  Meet you on the 0/1/ side.
qrovidcore · 5 years ago
hey what’s up tumblr i’ve now seen hbo’s watchmen all the way through Three Fucking Times and i very well may go for a fourth if given an excuse whoops and apparently i can’t stop thinking about Laurie’s joke in She Was Killed By Space Junk, no i’m not the first person to analyze this and i’m sure i won’t be the last but i sure do have some Thoughts™, so here’s some meta let’s go.
major spoilers ahead for the entire series:
Hey, it’s me again. I’ve got a joke. Stop me if you’ve heard this one. There’s this guy, he’s a bricklayer. He’s really good at it. He’s a real master of his craft. Because he’s precise. Every brick has its place. Anyway this guy has a daughter and he’s gonna teach her to be a bricklayer because after all, all a man has is his legacy. So dad decides to build a barbecue in the backyard. He does the math. He figures out exactly what he needs and he shows the daughter how to do everything. Step by step. And when he finishes, it’s a beauty. It’s a perfect barbecue. Just the way he drew it in blueprints. Only one problem. There’s a brick left over. One single brick. The guy freaks out. He must have done something wrong. He’s gonna have to start all over again. So he picks up his sledgehammer to knock the thing to pieces and his daughter suddenly says ‘daddy wait! I have an idea.’ She picks up the orphan brick and throws it up into the air as high as she can. And then…shit. Messed it up.
Okay forget that joke. Can I tell you another one?
As I said, I’m not the first to break down that Laurie is referring to specific people who have an influence on the story, there’s plenty of meta posts online that’ll say the same thing. I just think this is a Really Clever way to introduce us to her, to the major players in this story, and to the events from the comic that are going to end up being referenced. Anyhow, the bricklayer here is The Comedian. Laurie’s father. I’ll get back to this and how it connects later, but given that one of Watchmen’s major themes is the concept of legacy - who carries it and how, and what happens when that legacy is painful - this is a neat little hook into that idea. Laurie’s dad’s legacy. What she’s done with it, what she’s going to do with it, how she feels about it. Again, coming back to that.
Okay. Forget the brick. New joke. Three heroes die and they all show up at the pearly gates. God’s there and he’s going to decide what their eternal fate shall be: heaven or hell. Our first hero is dressed up like a big owl. God says to him “I gifted you the ability to make fantastic inventions. What did you do with this amazing talent?” Owl guy says “I made this really awesome flying ship and lots of cool outfits and weapons so I could bring peace to the city.” God asks, “So how many people did you kill?” Owl guy seems offended. He says “Zero. I didn’t take a single life.” God frowns. “Sorry owl guy, your heart’s in the right place but you’re just too soft.” God snaps his fingers and the hero goes to hell.
I'm not super into the comic so it took me a while to get that she's referencing Nite Owl. I think this is strange since he doesn't appear in the show himself, whereas everyone else she talks about does, but I suppose it gives a more rounded-out view of the different approaches to heroism, and what exactly constitutes it, and also ties in another one of the original Minutemen. They did cut this over her arrest of Mr. Shadow in the bank, which makes me wonder about his role and why he appeared, and I still find it strange that this part of the joke wasn't about someone who had more of a presence in the show. (Though that being said, DC making fun of Batman, their own big-ticket character? 10/10 thank you for this).
Where was I? The pearly gates await our next hero in line for Almighty judgment. Our hero number two is confident he can game this out because that’s his God-given talent: smarts. Some might even say he’s the smartest man in the world. “So what did you do with that big brain I gave you?” asks God. “As a matter of fact, I saved humanity, ”says Smarty Pants. “Well how’d you do that,” asks God.” “Well I dropped a giant alien squid on New York and everybody was so afraid of it they stopped being afraid of each other.” “OK,” says God. “How many people did you kill?” Smarty Pants smiles. “Three million, give or take. But you can’t make an omelet without breaking a couple of eggs. “Christ,” God says. “You’re a fucking monster.”  “Am not,” says Smarty Pants. God snaps his fingers and our hero goes to hell.
GOD YES PLEASE DRAG OZYMANDIAS. GET THIS FUCKER’S ASS. Though the line that’s sticking out to me here is “You can’t make an omelet without breaking a couple of eggs.” Watchmen’s got an egg motif - and that’s an entire post on its own - and wow this is a place to drop it. I find it interesting that it’s given to Adrien here. Especially since it comes back later, when Will tells Angela that that’s what Jon said in justification of giving his life to stop the 7th K/Cyclops and Trieu. Eggs are used for a lot of things, but this line ties the motif solidly to a value of life here - how Adrien is the way he is because he refuses to value other peoples’, and maybe how Jon is the way he is because, when you can see the future laid out before you and live knowing how you’re going to die, how do you learn to value your own?
Okay. We’re down to the nitty gritty now. One hero left. God cracks his knuckles ready to administer the final reckoning. Now Hero Number 3 is pretty much a god himself. So for the sake of telling them apart, he’s blue and he likes to stroll around with his dick hanging out. He can teleport, he can see into the future, he blows shit up. He’s got actual superpowers. Regular God asks Blue God what have you done with these gifts?” Blue God says “I fell in love with a woman, I walked across the sun, and then I fell in love with another woman. I won the Vietnam War. But mostly I just stopped giving a shit about humanity.” God sighs. “Do I even need to ask how many people you’ve killed?” Blue guy shrugs. “A live body and a dead body have the same number of particles so it doesn’t matter. And it doesn’t matter how I answer your question because I know you’re sending me to hell.” “How do you know that?” asks God. Blue God sounds very sad when he softly says “Because I’m already there.” And so, a mere piston in the inevitable of time and space God does what he did and will do. He snaps his fingers and the hero goes to hell.
And now, we’ve got Jon. Dr. Manhattan. It's a neat moment of insight into his actions, motives, and how those are perceived by others (namely Laurie), and it's a nice thread of introduction to his previous actions to drop for audiences who haven't read the comics (actually, I can make this point about Adrien’s part of the joke too). Especially because most of what we get of Jon in-show is his relationship with Angela, his entire character arc really revolves around her and we don't see him portrayed as the contentious, unfeeling figure the world sees him as. So this sort of contrast between him as a figure and him as a person is very telling, doubly so coming from someone who it's clear knew him. And I really appreciate that there’s just as much stiffness as there is warmth to the Jon we the audience see - he’s kind, he’s loving, but he’s also very matter-of-fact and deterministic, and that bit of characterization really spans the gap between these two versions of him.
And so it’s been a long day at the pearly gates. All the heroes have gone to hell. His work done, God’s packing up to go home and then he notices someone waiting. But it’s not a hero, it’s just a woman. “Where did you come from?” asks God. “Oh I was just standing behind those other guys the whole time, you just didn’t see me.” “Did I give you a talent,” God asks. “No, none to speak of,” says the woman.  God gives her a good long look. “I’m so sorry. I’m embarrassed. Seriously, this almost never happens but I don’t know who you are.” And the woman looks at God and she quietly says “I’m the little girl who threw the brick in the air.” And a sound from above, something falling: the brick. God looks up but it’s too late. He never saw it coming. It hits him so hard, his brains shoot out his nose. Game over. He’s dead. And where does God go when he dies? He goes to hell. 
Into some Thoughts™ that I haven’t seen anyone theorize yet(?): I think God is meant to be Lady Trieu, and even if Laurie wouldn’t know this yet that’s some brilliant fucking foreshadowing. It's not as exact, but enough parallels are there that I think they're purposeful. It makes Trieu out as the ultimate judge of everyone - and in a way, she is. She sees herself as the most deserving of power of everyone, and it's her who kills Dr. Manhattan - sends him to hell, you could say, and he knows she's going to do it. It also hints at how she's going to die too, crushed by her machine falling from the sky like the brick, because she didn't expect anyone would be capable of stopping her. And where does God go when he dies? He goes to hell. Trieu isn't ultimately above the others, and she's subject to their justice as they are to hers. 
Fitting too that Laurie is involved with the plan to stop Trieu, since, as I said I’d come back to, the girl who threw the brick is Laurie herself. Her depiction of herself in this way is representative, perhaps, of Laurie's own feelings on vigilantism and what justice is, and that she's the force that's going to bring down these overblown personalities and their many incorrect uses of their abilities. Given this, it's interesting to think how the "failed" joke at the beginning connects, given that Laurie's dad is the bricklayer, and he's definitely... not a good person, or at least not in this continuity. But I wonder if it's indicative of what Laurie mentions about her parents training her up to do vigilante stuff (especially since she’s based in part(?) on a member of the Minutemen from the comic), and how she feels about her father and his work. If the brick is symbolic of his work as a vigilante, is Laurie throwing the brick in the air, and ultimately taking down the threat at the top, meant to indicate how she sees herself using what she learned from him, or - maybe and - a disrespect for his work based on her justified hatred of him?
Roll on snare drum. Curtains. Good joke. 
maxksx · 6 years ago
What if… Information Processing as Hyperobject
Capitalism is not a human invention, but a viral contagion, replicated cyberpositively across post-human space. Self-designing processes are anastrophic and convergent: doing things before they make sense. Time goes weird in tactile self-organizing space: the future is not an idea but a sensation. – Sadie Plant and Nick Land 
HYPERORGANISMS AND ZOMBIE SOCIETY
As I was reading R. Scott Bakker’s blog this morning, I came across an interesting post, The Zombie Enlightenment. In it he mentioned the notion of “…post-Medieval European society as a kind of information processing system, a zombie society”. Like many things this set my mind on hyperdrive. I was reminded of my recent reading of Timothy Morton’s interesting work Hyperobjects: Philosophy and Ecology after the End of the World, where he describes a hyperobject:
the term hyperobjects to refer to things that are massively distributed in time and space relative to humans.  A hyperobject could be a black hole. A hyperobject could be the Lago Agrio oil field in Ecuador, or the Florida Everglades. A hyperobject could be the biosphere, or the Solar System. A hyperobject could be the sum total of all the nuclear materials on Earth; or just the plutonium, or the uranium. A hyperobject could be the very long-lasting product of direct human manufacture, such as Styrofoam or plastic bags, or the sum of all the whirring machinery of capitalism. Hyperobjects, then, are “hyper” in relation to some other entity, whether they are directly manufactured by humans or not.1
Morton’s “the sum of all the whirring machinery of capitalism” brought to mind Nick Land’s adaptation of Deleuze and Guattari’s accelerating capital as an informational entity that is auto-organizing energy, matter, and information toward a technological Singularity (i.e., “There’s only really been one question, to be honest, that has guided everything I’ve been interested in for the last twenty years, which is: the teleological identity of capitalism and artificial intelligence” – here).  We’ve seen how the debt system in D&G is part of an algorithmic memory or processing system to mark and channel desire or flows of energy-matter: here and here (i.e., “Society is not exchangist, the socious is inscriptive: not exchanging but marking bodies, which are part of the earth. We have seen that the regime of debt is the unit of alliance, and alliance is representation itself. It is alliance that codes the flows of desire and that, by means of debt, creates for man a memory of words (paroles).” and: “Man must constitute himself through repression of the intense germinal influx, the great biocosmic memory that threatens to deluge every attempt at collectivity.”). Of course they spoke in anthropological terms that seem quaint now in our computational jargon age, which brings me to Cesar Hidalgo.
We build against sadism. We build to experience the joy of its every fleeting defeat. Hoping for more joy, for longer, each time, longer and stronger; until, perhaps, we hope, for yet more; and you can’t say it won’t ever happen, that the ground won’t shift, that it won’t one day be the sadisms that are embattled, the sadisms that are fleeting, on a new substratum of something else, newly foundational, that the sadisms won’t diminish or be defeated, that those for whom they are machinery of rule won’t be done. – China Miéville, On Social Sadism
EMERGENCE, SOLIDITY, AND COMPUTATION: CAPITAL AS HYPERORGANISM
In Why Information Grows: The Evolution of Order, from Atoms to Economies, Cesar Hidalgo describes the basic physical mechanisms that contribute to the growth of information. These include three important concepts: the spontaneous emergence of information in out-of-equilibrium systems (the whirlpool example), the accumulation of information in solids (such as proteins and DNA), and the ability of matter to compute.2
Explicating this, he tells us that the first idea connects information with energy, since information emerges naturally in out-of-equilibrium systems. These are systems of many particles characterized by substantial flows of energy. Energy flows allow matter to self-organize. (Hidalgo, KL 2448) The second idea is that solids are essential for information to endure. Yet not just any solid can carry information. To carry information, solids need to be rich in structure. (Hidalgo, KL 2465) And, finally, energy is needed for information to emerge, and solids are needed for information to endure. But for the growth of information to explode, we need one more ingredient: the ability of matter to compute (i.e., the final step is intelligence and auto-awareness, decisional and ecological). (Hidalgo, KL 2475) As he remarks:
The fact that matter can compute is one of the most amazing facts of the universe. Think about it: if matter could not compute, there would be no life. Bacteria, plants, and you and I are all, technically, computers. Our cells are constantly processing information in ways that we poorly understand. As we saw earlier, the ability of matter to compute is a precondition for life to emerge. It also signifies an important point of departure in our universe’s ability to beget information. As matter learns to compute, it becomes selective about the information it accumulates and the structures it replicates. Ultimately, it is the computational capacities of matter that allow information to experience explosive growth.(Hidalgo, KL 2477-2482).
Of course Hidalgo, like many current thinkers, never asks the obvious questions: what, if anything, is behind all this? Is there a telos to this information-processing initiative of the universe, or is it all blind accident and process, a sort of accidental start-up algorithm in matter that suddenly began with the Big Bang, a part of the nature of things from the beginning? He describes self-organizing matter, its need for more permanent and enduring structures to support its processes, and then the emergence of computation or intelligence: “these objects allow us to form networks that embody an increasing amount of knowledge and knowhow, helping us increase our capacity to collectively process information” (Hidalgo, KL 2518).
I’ve never liked the “self” in self-organizing – it just seems too human, all too human a concept. Maybe auto-organizing should be its replacement. Either way, what needs to be elided is the notion that there is some essential or core being behind the appearances directing this auto-organizing activity. It’s more a blind process having to do with the actual aspects of quantum and relativity theory in our universe than some notion of a personality behind things (i.e., God or Intelligence). When does matter become purposeful, attain a teleological, goal-oriented ability to organize itself and its environment? Is this what life is? Is life that threshold? Or something else? Many living creatures do not need an awareness of auto-distancing from their environment to appear purposeful. Think of those elder creatures of the oceans, the predators, the sharks, with their drive to hunt, select, kill. Is this a telos, or just the organic mode of information as blind process, working in an environment to satisfy the base requirements to endure?
We as humans seem to think we’re special, situated as the exception rather than the rule. But are we? No. What if we are, like all other durable organic systems, just the working out of blind processes and algorithms of information processing as it refines itself and emerges into greater and greater complexity? But this is to assume that “us” will remain human, that this teleological or non-teleological process ends with the human species. But does it? Or are we but the transitional object of some further emergence, one that would be even more permanent, more adaptive to self-organizing matter, more enduring, more computationally viable? I think you know where I’m going here: the machinic phylum, the emergence of AI, robotics, nanotech, ICTs, etc. that we see all around us – are these not the further immanent self-organization of matter into greater and more lasting forms that will eventually outpace the organic hosts that supported their emergence? Are we not seeing the edge of this precipice in such secular myths as posthumanism and transhumanism? The Technological Singularity as a more refined emergence of this self-organizing information-processing entity or entities: this collective or hive, even distributed, intelligence emerging in such external devices?
Hidalgo mentions the personbyte theory, which suggests a relationship between the complexity of an economic activity and the size of the social and professional network needed to execute it. Activities that require more personbytes of knowledge and knowhow need to be executed by larger networks. This relationship helps explain the structure and evolution of our planet’s industrial structures. The personbyte theory implies (1) that simpler economic activities will be more ubiquitous, (2) that diversified economies will be the only ones capable of executing complex economic activities, (3) that countries will diversify toward related products, and (4) that over the long run a region’s level of income will approach the complexity of its economy, which we can approximate by looking at the mix of products produced and exported by a region, since products inform us about the presence of knowledge and knowhow in a region. (Hidalgo, KL 2524-2530).
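To make implications (1) and (2) concrete, here is a minimal sketch of the diversity/ubiquity bookkeeping that underlies this kind of analysis, in the style of the method-of-reflections measures Hidalgo’s economic-complexity work is built on. The toy export matrix and all numbers are my own illustrative assumptions, not data from the book:

```python
import numpy as np

# Toy binary export matrix M: rows = countries, cols = products.
# M[c, p] = 1 if country c exports product p competitively.
# (Illustrative data only -- not from Hidalgo's book.)
M = np.array([
    [1, 1, 1, 1],   # a diversified economy
    [1, 1, 0, 0],
    [1, 0, 0, 0],   # an economy making only the most ubiquitous product
])

diversity = M.sum(axis=1)   # how many products a country makes
ubiquity = M.sum(axis=0)    # how many countries make a product

# One step of the "method of reflections": a country's complexity is the
# average ubiquity of its products; complex products are made by few countries.
avg_ubiquity_of_exports = (M @ ubiquity) / diversity

for c, (d, u) in enumerate(zip(diversity, avg_ubiquity_of_exports)):
    print(f"country {c}: diversity={d}, mean ubiquity of its products={u:.2f}")
```

In the toy matrix, the product everyone makes is the ubiquitous, simple one, while the products exported only by the diversified economy are the complex ones: implication (2) in miniature.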
In this sense capitalism is an informational entity or hyperobject, a self-organizing structure for energy, matter, and information to further its own emergence through temporal computational algorithms. As Hidalgo reiterates, this dance of information and computation is powered by the flow of energy, the existence of solids, and the computational abilities of matter. The flow of energy drives self-organization, but it also fuels the ability of matter to compute. Solids, on the other hand, from proteins to buildings, help order endure. Solids minimize the need for energy to produce order and shield information from the steady march of entropy. Yet the queen of the ball is the emergence of collective forms of computation, which are ubiquitous on our planet. Our cells are networks of proteins, which form organelles and signaling pathways that help them decide when to divide, differentiate, and even die. Our society is also a collective computer, which is augmented by the products we produce to compute new forms of information. (Hidalgo, KL 2532-2537).
CROSSING THE RUBICON?
Yet is the organic base the most efficient? Are we not already dreaming of more permanent structures, more enduring and durable robotics, machinics, etc.? Hidalgo is hopeful for collective humanity, but is this necessarily so? It looks more and more as if we are but a form of matter that was useful up to this point, yet is becoming apparent as obsolete and limited for the further auto-organization of information in the future. What Kant termed finitude is this limiting factor for humans: the human condition. Are we seeing the power of matter, energy, and informational auto-organization about to make the leap from the human to a more permanent form? A crossing of the Rubicon that humanity, as a species, may not survive? Possibly even merging ourselves into more permanent structures to support information and intelligence in its need to escape the limits of planetary existence?
The questions we need to be raising now are these: What happens to humans if machines gradually replace us on the job market? When, if ever, will machines outcompete humans at all intellectual tasks? What will happen afterward? Will there be a machine-intelligence explosion leaving us far behind, and if so, what, if any, role will we humans play after that?3 Max Tegmark* lists the usual ill-informed objections circulating on the blogosphere, which cannot and will not ever answer these questions:
Scaremongering: Fear boosts ad revenues and Nielsen ratings, and many journalists seem incapable of writing an AI article without a picture of a gun-toting robot.
“It’s impossible”: As a physicist, I know that my brain consists of quarks and electrons arranged to act as a powerful computer, and that there’s no law of physics preventing us from building even more intelligent quark blobs.
“It won’t happen in our lifetime”: We don’t know what the probability is of machines reaching human-level ability on all cognitive tasks during our lifetime, but most of the AI researchers at a recent conference put the odds above 50 percent, so we’d be foolish to dismiss the possibility as mere science fiction.
“Machines can’t control humans”: Humans control tigers not because we’re stronger but because we’re smarter, so if we cede our position as the smartest on our planet, we might also cede control.
“Machines don’t have goals”: Many AI systems are programmed to have goals and to attain them as effectively as possible.
“AI isn’t intrinsically malevolent”: Correct— but its goals may one day clash with yours. Humans don’t generally hate ants, but if we wanted to build a hydroelectric dam and there was an anthill there, too bad for the ants.
“Humans deserve to be replaced”: Ask any parent how they’d feel about your replacing their child by a machine and whether they’d like a say in the decision.
“AI worriers don’t understand how computers work”: This claim was mentioned at the above-mentioned conference and the assembled AI researchers laughed hard. (Brockman, pp. 44-45)
Tegmark will – as Hidalgo did – speak of humans as information processing systems:
we humans discovered how to replicate some natural processes with machines that make our own wind, lightning, and horsepower. Gradually we realized that our bodies were also machines, and the discovery of nerve cells began blurring the borderline between body and mind. Then we started building machines that could outperform not only our muscles but our minds as well. So while discovering what we are, will we inevitably make ourselves obsolete? (Brockman, p. 46)
That’s the hard question at the moment. And, one still to be determined. Tegmark’s answer is that we need to think this through: “The advent of machines that truly think will be the most important event in human history. Whether it will be the best or worst thing ever to happen to humankind depends on how we prepare for it, and the time to start preparing is now. One doesn’t need to be a superintelligent AI to realize that running unprepared toward the biggest event in human history would be just plain stupid.” (Brockman, p. 46)
INVENTING A MODEL OF THE FUTURE? HYPERSTITIONAL ENERGETICS?
What would be interesting is to build an informational model, a software application that would model this process from the beginning of the universe to now: an auto-organizing system of matter, energy, and information moving into various niches of complexification as it stretches over the temporal dimensions as a hyperobject or superorganism. Watch it in the details of, let’s say, a Braudelian input of material economic and socio-cultural data: the emergence of capitalism as a hyperobject over time and its complexification up to this projected Singularity. Obviously one would use statistical and probabilistic formulas and mathematical algorithms to accomplish this with sample data, etc. Either way it would show a possible scenario of the paths forward of human and machinic systems as they converge/diverge in the coming years. I assume those like the complexity theorists in New Mexico have worked out such approximations? I need to study this… someone like Stuart Kauffman? Such as this essay: here:
The universe is open in being partially lawless at the quantum-classical boundary (which may be reversible). As discussed, the universe is open upward in complexity indefinitely. Based on unprestatable Darwinian exaptations, the evolution of the biosphere, economy and culture seem beyond sufficient law, hence the universe is again open. The unstatable evolution of the biosphere opens up new Adjacent Possible adaptations. … It seems true both that the becoming of the universe is partially beyond sufficient natural law, and that opportunities arise and disappear and either ontologically, or epistemologically, or lawlessly, may or may not be taken, hence can change the history of our vast reaction system, perhaps change the chemistry in galactic giant cold molecular clouds, and change what happens in the evolution of the biosphere, economy and history.
Sounds familiar, in the sense of Meillassoux’s attack on sufficient causation (i.e., the ‘principle of sufficient reason’), when Kauffman mentions “the evolution of the biosphere, economy and culture seem beyond sufficient law, hence the universe is again open”. Of course Kauffman’s thesis is: “a hypopopulated chemical reaction system on a vast reaction graph seems plausibly to exhibit, via quantum behavior and decoherence, the acausal emergence of actual molecules via acausal decoherence and the acausal emergence of new ontologically real adjacent possibles that alter what may happen next, and give rise to a rich unique history of actual molecules on a time scale of the life time of the universe or longer. The entire process may not be describable by a law.” In other words it’s outside “sufficient reason”.
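As a gesture toward the kind of model imagined a few paragraphs back, here is a deliberately crude toy of Kauffman’s “adjacent possible”, in which each realized novelty enlarges the space of further possible combinations. Every representation and parameter here is an illustrative assumption of mine, not anything from Kauffman’s essay:

```python
import random
from itertools import combinations

random.seed(7)

# Toy "adjacent possible": the existing elements define which novel
# combinations can occur next, and each realized novelty becomes a new
# element, enlarging the possibility space. Parameters are illustrative.
elements = list(range(4))   # primitive starting stock
realized = set()            # pairs already turned into novelties
p = 0.1                     # chance a possible combination is realized

sizes = []
for step in range(10):
    if len(elements) > 400:  # keep the toy cheap; the growth is explosive
        break
    possible = [pair for pair in combinations(elements, 2) if pair not in realized]
    for pair in possible:
        if random.random() < p:
            realized.add(pair)
            elements.append(len(elements))  # the novelty is itself a new element
    sizes.append(len(elements))

print(sizes)  # accelerating growth: each novelty expands the adjacent possible
```

Even this toy shows the signature the essay gestures at: the possibility space is redefined by its own history, so no fixed list of states could be written down in advance.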
In his The Blank Swan: The End of Probability, Elie Ayache is, like Land, tempted to see Capitalism as a hyperobject or entity, saying, “What draws me to Deleuze is thus my intention of saying the market as univocal Being”.4 He goes on to say:
The problem with the market is that it is immanence incarnate. It has no predefined plane. Much as I had first intended derivatives and their pricing as my market and my surface, I soon found myself making a market of the writings of Meillassoux, Badiou and Deleuze. They became my milieu of immanence. The plane of immanence on which to throw my concept of the market soon became a plane of immanence on which to deterritorialize thought at large. I soon became tempted to remake philosophy with my concept of the market rather than remake the market with a new philosophy. The market became a general metaphor for writing, the very intuition of the virtual with which it was now possible to establish contact. I was on my way to absolute deterritorialization, and the question became how to possibly deliver this ‘result’ otherwise than in a book that was purely philosophical. (Ayache, pp. 303-304)
Of course he’s dealing with the specifics of trading in the derivatives market, etc., but one can extrapolate to a larger nexus of possibilities. This notion of both capital and thought making a pact of absolute deterritorialization seems to align with Hidalgo’s history of information theory and its own auto-organizational operations.
Ayache will, like Land, see the market as a unified entity: “The market, as market, is one reality. It cannot be separated or differentiated by external difference. It is an intensity: the intensity of the exchange, presumably. It follows no stochastic process, with known volatility or jump parameters. It is a smooth space, as Deleuze would say, not a striated space.” (Ayache, p. 325)
As well as an organism: “What gets actualized and counter-actualized (i.e. differentiated) here is the whole probability distribution, the whole range of possibilities, and the process is the process of differentiation (or distinction, or emergence, literally birth) of that put. The market differentiates itself literally like an organism, by ‘growing’ that put (like an animal grows a tail or like birds grow wings) and by virtually growing all the successive puts that our trader will care to ask about.” (Ayache, p. 338) In his book Hidalgo makes a curious remark: “As of today, November 11, 2014, ‘why information grows’ returns four hits on Google. The first one is the shell of an Amazon profile created for this book by my UK publisher. Two of the other hits are not a complete sentence, since the words are interrupted with punctuation. (By contrast, the phrase ‘why economies grow’ returns more than twenty-six thousand hits.)” (Hidalgo, KL 2645) So the notion of the market as an entity that grows informationally seems almost self-evident to many at the moment.
Hidalgo will also mention the father of neoliberalism, Friedrich Hayek, who famously pointed this out in a 1945 paper (“The Use of Knowledge in Society,” American Economic Review 35, no. 4 [1945]: 519–530). There, Hayek identified money as an information revelation mechanism that helped uncover information regarding the availability and demand of goods in different parts of the economy. (Hidalgo, KL 3060) This notion of money as a “revelation mechanism” fits into current trends of Bitcoin as a virtual apparatus for informational mechanisms and the market growth of capital as a hyperorganism.
THE VIRTUAL ECONOMY: BLOCKCHAIN TECHNOLOGY AND BITCOIN-ECONOMICS
Some say we are in the Age of Cryptocurrency, in which Bitcoin and blockchain technology will move things into the virtual arena, where energy, matter, and information are enabled to push this growth process forward in an ever-accelerating manner (see here) – part of what they’re terming the programmable economy. As Sue Troy explains it, the programmable economy — a new economic system based on autonomic, algorithmic decisions made by robotic services, including those associated with the Internet of Things (IoT) — is opening the door to a range of technological innovation never before imagined. This new economy — and more specifically the concept of the blockchain and metacoin platforms that underpin it — promises to be useful in improving an astonishing variety of issues: from reducing forgery and corruption to simplifying supply chain transactions to even greatly minimizing spam. In her interview she states:
Valdes explained the technical foundations of the blockchain ledger and the programmable economy. He described the programmable economy as an evolution of the API economy, in which businesses use APIs to connect their internal systems with external systems, which improves the businesses’ ability to make money but is limited by the fact that the systems are basically siloed from one another. The Web was the next step in the evolution toward the programmable economy, he said, because it represents a “global platform for programmable content. It was decentralized; it was a common set of standards. Anyone can put up a Web server and plug into this global fabric for content and eventually commerce and community.”
The programmable economy, Valdes said, is enabled by “a global-scale distributed platform for value exchange. … The only thing that’s uncertain is what form it will take.” Valdes pointed to Bitcoin, which uses blockchain ledger technology, as a prominent example of a “global-scale, peer-to-peer, decentralized platform for global exchange.”
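Since the blockchain ledger is the load-bearing concept here, a minimal sketch of the data structure may help. This is only the hash-chained-ledger core; real platforms such as Bitcoin add proof-of-work mining, peer-to-peer consensus, and digital signatures, none of which is modeled here:

```python
import hashlib
import json
import time

# A minimal hash-chained ledger: each block commits to the hash of its
# predecessor, so the history cannot be silently rewritten.
def make_block(transactions, prev_hash):
    block = {
        "timestamp": time.time(),
        "transactions": transactions,
        "prev_hash": prev_hash,
    }
    payload = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block

def verify(chain):
    """Recompute every hash; any tampering breaks the chain."""
    for i, block in enumerate(chain):
        payload = json.dumps(
            {k: block[k] for k in ("timestamp", "transactions", "prev_hash")},
            sort_keys=True).encode()
        if block["hash"] != hashlib.sha256(payload).hexdigest():
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

genesis = make_block(["genesis"], prev_hash="0" * 64)
chain = [genesis, make_block(["alice pays bob 5"], genesis["hash"])]
print(verify(chain))   # True
chain[1]["transactions"] = ["alice pays bob 500"]
print(verify(chain))   # False -- the ledger detects the edit
```

The point of the design is visible in the last two lines: because each block commits to the hash of its predecessor, retroactively editing any recorded exchange invalidates the rest of the chain.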
Ultimately, Valdes states that the idea of programmability can be extended to the corporate structure. Today the rules of incorporation are fixed, and the corporation is represented by its employees and a board of directors. In the future, corporations could be “more granular, more dynamic and untethered from human control”.
Of course this fits into the notion that the future City States or Neocameral Empires will also become “more granular, more dynamic and untethered from human control” as machinic intelligence and other convergences of the NBIC technologies take over more and more from humans.
One wants to take a step back, catch one’s breath, and say: “Whoa there, partner, just wait a minute!” But by the time we institute any ethical or governmental measures it will, as with most of history, be far too late to stop or even slow down this juggernaut of growing informational hyperorganisms. As one advocate suggested, there will come a time when everything is connected in an information environment: “You can put monitors in anything to measure or quantify exchanges, the sensors are connected to smart contracts, the contracts are changing as the exchanges take place, so you have this dynamic process that’s taking place in the supply chain, constantly refreshing the economic conditions that surround it…” (see). In this programmable information economy, as Troy sees it, organizations of the future will need a different organizational model, he said. “You see society changing in a sharing, collaborative environment. Think about it being the same internally.”
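To make the sensors-wired-to-contracts idea concrete, here is a toy sketch of a self-adjusting agreement; the cold-chain scenario, thresholds, penalty schedule, and class name are all invented for illustration:

```python
# A toy "smart contract" in the spirit of the supply-chain example above:
# a shipment contract whose payout adjusts automatically to sensor readings.
# Thresholds, penalties, and names are illustrative assumptions only.
class ColdChainContract:
    def __init__(self, price, max_temp_c=8.0, penalty_per_breach=0.05):
        self.price = price
        self.max_temp_c = max_temp_c
        self.penalty = penalty_per_breach
        self.breaches = 0

    def on_sensor_reading(self, temp_c):
        # Each monitored reading can re-price the contract, no human in the loop.
        if temp_c > self.max_temp_c:
            self.breaches += 1

    def settle(self):
        payout = self.price * max(0.0, 1.0 - self.penalty * self.breaches)
        return round(payout, 2)

contract = ColdChainContract(price=1000.0)
for reading in [4.2, 5.1, 9.3, 6.0, 10.8]:   # simulated IoT temperature stream
    contract.on_sensor_reading(reading)
print(contract.settle())   # 900.0 -- two breaches, 5% penalty each
```

Put such a rule on a shared ledger like the one sketched earlier and you get the “dynamic process… constantly refreshing the economic conditions” the quote describes, with no human renegotiation in the loop.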
As the pundit Jacob Donnelly tells it, bitcoin is in an existential crisis, yet its future is increasingly likely to be bright. This is the seventh year in the development of the network, and it takes years to build out a protocol, which is what bitcoin is. As Joel Spolsky says, “Good software takes 10 years. Get used to it.”
“Bitcoin is comparable to the pre-web-browser 1992-era Internet. This is still the very early days of bitcoin’s life. The base layer protocol is now stable (TCP/IP). Now engineers are building the second layer (HTTP) that makes bitcoin usable for average people and machines,” Jeff Garzik, founder of Bloq and Core developer of bitcoin, told me.
Once the infrastructure is built, which still has many more years ahead of it, with companies like Bloq, BitGo, 21.co, and Coinbase leading the charge, we’ll begin to see solid programs built in the application layer.
But even while we wait for the infrastructure to be built, it’s clear that bitcoin is evolving. Bitcoin is not perfect. It has a lot of problems that it is going to have to overcome. But to label it dead or to call for it to be replaced by something new is naive and shortsighted. This battle in the civil war will end, likely with Bitcoin Classic rolling out a hard fork with significant consensus. New applications will be built that provide more use cases for different audiences. And ultimately, the Internet will get its first, true payment protocol.
But Bitcoin is seven years old. It will take many years for the infrastructure to be laid and for these applications to reach critical mass. Facebook had nearly 20 years after the browser was released to reach a billion users. To imagine bitcoin’s true potential, we need to think in decades, not in months or years. Fortunately, we’re well on our way.
FUTURE TECH: AUGMENTED IMMERSION AND POLICING INFORMATION
One imagines a day when every aspect of one’s environment, internal/external, intrinsic/extrinsic, is programmable and open to revision, updates, changes, and exchanges in an ongoing informational economy so invisible and ubiquitous that even the machines will forget they are machines: only information growth will matter, along with its durability, expansion, and acceleration.
In an article, Nicole Laskowski tells us that augmented and virtual reality technologies may be better suited for the enterprise than the consumer market as these technologies become more viable. Google Glass, an AR technology, for example, raised ire over privacy concerns. But in the enterprise? Employees could apply augmented and virtual reality technology to build rapid virtual prototypes, test materials, and provide training for new employees — all of which can translate into productivity gains for the organization.
“The greatest level of adoption is around the idea of collaboration,” Soechtig said. Teams that aren’t in the same physical environment can enter a virtual environment to exchange information and ideas in a way that surpasses two-dimensional video conferencing or even Second Life Enterprise. Nelson Kunkel, design director at Deloitte Digital, described virtualized collaboration as an “empathetic experience,” and Soechtig said the technology can “take how we communicate, share ideas and concepts to a completely new level.”
For some companies, the new level is standard operating procedure. Ford Motor Company has been using virtual reality internally for years to mock up vehicle designs at the company’s Immersion Lab before production begins. Other companies, such as IKEA, are enabling an augmented reality experience for the customer. Using an IKEA catalogue and catalogue app, customers can add virtual furnishings to their bedrooms or kitchens, snap a photo and get a sense for what the items will look like in their homes. And companies such as Audi and Marriott are turning VR headsets over to customers to help them visually sift through their choices for vehicle customizations and virtually travel to other countries, respectively.
Vendors, too, see augmented and virtual reality as an opportunity — from Google and its yet-to-hit-the-market Google Glass: Enterprise Edition to Facebook and its virtual reality headset, Oculus Rift, to Microsoft and its HoloLens, which it describes as neither augmented nor virtual reality, but rather a “mixed reality that lets you enjoy your digital life while staying more connected to the world around you,” according to the website. All three companies have eyes on the enterprise.
NEOCAMERALISM OR GOVERNANCE OF INFORMATION
Is this techno-optimism or its opposite, utopia or dystopia… will we even be there to find out? In his book The Disordered Police State: German Cameralism as Science and Practice, on the old princedoms of the cameral states of Germany, Andre Wakefield comments:
The protagonist of my story is the Kammer, that ravenous fiscal juridical chamber that devoured everything in its path. History, I am told, is only as good as its sources, and the cameral sciences, which purported to speak publicly about the most secret affairs of the prince, were deeply dishonest. We cannot trust them. And because many of the most important cameral sciences were natural sciences, the dishonesty of the Kammer has been inscribed into the literature of science and technology as well. There is no avoiding it.5
The German cameralists were the writer-administrators and academics who provided a blueprint for governance in early modern Germany – much like our current systems of academic and think-tank experts who provide the base blueprints for governance around the world today.
Much of what we read about our future is spawned in part by, and funded by, such systems of experts, academics, and governmental or corporate powers seeking to convince, manipulate, and guide the very construction of a future tending toward their goals and agendas. A sort of policing of culture: a policy is a policing, a movement of the informational context to support these entities and organizations.
In the future we will indeed program many capabilities that closely resemble those arising from ‘true’ intelligence into the large-scale, web-based systems that are likely to increasingly permeate our societies: search engines, social platforms, smart energy grids, self-driving cars, as well as a myriad other practical applications. All of these will increasingly share many features of our own intelligence, even if lacking a few ‘secret sauces’ that might remain to be understood.6
One aspect of this that I believe people and pundits overlook is that the large datastores these advanced AI systems need will require knowledge workers to populate them for a long while yet. I believe that instead of jobs and work being downsized by automation, work will be opened up into ever-increasing informational ecosystems that we have yet to even discern, much less understand. I’m not optimistic about this whole new world, yet it is apparent that it is coming, and organizing us as we organize it. Land spoke of the hyperstition as a self-replicating prophecy. If the books, journals, and other memes elaborated around this notion of information economy and exchange are valid, we are moving into this world at light-speed, and our older political, social, and ethical systems are being left far behind, unable to cope with this new world of converging technologies and information intelligence.
More and more our planet will seem an intelligent platform or hyperorganism, a fully connected biospheric intelligence or sentient being of matter, energy, and information: a self-organizing entity that revises, updates, edits, and organizes its information on climate, populations, bioinformatics, etc. along trajectories that we as humans, as an atomistic society, were incapable of. Change is coming… but whether for the better, no one can yet say. Eerily reminiscent of Ovid’s Metamorphoses, humans may merge or converge with this process to become strangely other: at once monstrous and uncanny.
(I’ll take this up in a future post…)
*Max Tegmark: Physicist, cosmologist, MIT; scientific director, Foundational Questions Institute; cofounder, Future of Life Institute; author, Our Mathematical Universe
1. Morton, Timothy (2013-10-23). Hyperobjects: Philosophy and Ecology after the End of the World (Posthumanities) (Kindle Locations 106-111). University of Minnesota Press. Kindle Edition.
2. Hidalgo, Cesar (2015-06-02). Why Information Grows: The Evolution of Order, from Atoms to Economies (Kindle Locations 2446-2448). Basic Books. Kindle Edition.
3. Brockman, John (2015-10-06). What to Think About Machines That Think: Today’s Leading Thinkers on the Age of Machine Intelligence (p. 43). HarperCollins. Kindle Edition.
4. Ayache, Elie (2010-04-07). The Blank Swan: The End of Probability (p. 299). Wiley. Kindle Edition.
5. Wakefield, Andre. The Disordered Police State: German Cameralism as Science and Practice (Kindle Locations 379-382). Kindle Edition.
6. Shroff, Gautam (2013-10-22). The Intelligent Web: Search, smart algorithms, and big data (p. 274). Oxford University Press, USA. Kindle Edition.
https://socialecologies.wordpress.com/2016/02/15/what-if-information-processing-as-hyperobject/
2 notes · View notes
myfeeds · 2 years ago
Text
Cryo-imaging lifts the lid on fuel cell catalyst layers
Manufacturers integrate these catalysts into complex assemblies called catalyst layers. Until now, they had to do so without a detailed picture of the resulting structure, as traditional imaging processes almost always cause some degree of damage. Vasiliki Tileli, head of the Laboratory for in-situ nanomaterials characterization with electrons in the School of Engineering, has found a way around this challenge. By imaging catalysts and their environment at below-freezing temperatures using cryogenic transmission electron tomography and processing the images with deep learning, she and her colleagues have succeeded in revealing, for the first time, the nanoscale structure of catalyst layers.

“We’re still far away from PEMFCs without platinum, which is very expensive, so in the short term, we need to reduce platinum loading to make this technology viable for mass production. It’s therefore imperative to understand how platinum sits in relation to other materials within the catalyst layer, to increase the surface area contact required for chemical reactions to take place,” Tileli explains. “That’s why it’s quite an achievement to image these catalysts in three dimensions; before, it was impossible to have the right contrast between the different catalyst layer components.” The work has recently been published in the journal Nature Catalysis.

Better preservation; higher resolution

During imaging using conventional electron microscopy, delicate catalyst layer samples often become damaged by electron beams, causing materials to shrink or deform. By carrying out the imaging in-situ at cryo-temperatures, Tileli and her team were able to preserve most of the catalyst layer’s morphology. Then, they used a machine learning algorithm to denoise and classify the images more accurately, allowing them to achieve a higher image resolution than had ever previously been possible.

Crucially, the scientists were able to reveal the heterogeneous thickness of a porous polymer layer on the catalysts called ionomer. Ionomer thickness strongly influences how well platinum catalysts perform. “The ionomer must have a certain thickness for the catalytic reactions to happen efficiently. Because we could do a full reconstruction of catalyst layers with limited damage to the structure, we could show, for the first time, how much platinum is covered with ionomer and the thickness of that coverage,” Tileli explains.

Such information could be a gold mine for catalyst manufacturers, who could use it to produce catalysts with more platinum particles that are covered by the right amount of ionomer — and that therefore perform optimally. “The cryo-aspect is the key component of this study. Ionomers are like proteins: they are soft, and require freezing conditions to stabilize and protect their structure,” Tileli says. “I think this advanced technique will therefore be useful not just for facilitating the mass manufacturing of PEMFCs through optimized platinum use, but also for many different materials science and energy applications — for example, battery storage, water electrolysis, and energy conversion systems in general.”
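The article does not detail the deep-learning step, so what follows is only a minimal illustrative sketch of what learning-based denoising of tomogram slices can look like: a small convolutional network trained on synthetic noisy/clean pairs. The architecture, noise model, and all parameters are assumptions of mine, not the EPFL team’s pipeline:

```python
import torch
import torch.nn as nn

# A deliberately tiny stand-in for learning-based denoising of tomogram
# slices: a small CNN trained to map synthetically noised images back to
# clean ones. Architecture, noise model, and sizes are assumptions.
denoiser = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, kernel_size=3, padding=1),
)
opt = torch.optim.Adam(denoiser.parameters(), lr=1e-3)

for step in range(200):
    clean = torch.rand(8, 1, 64, 64)               # stand-in for clean slices
    noisy = clean + 0.3 * torch.randn_like(clean)  # simulated detector noise
    loss = nn.functional.mse_loss(denoiser(noisy), clean)
    opt.zero_grad()
    loss.backward()
    opt.step()

print(f"final training loss: {loss.item():.4f}")
```

A real workflow would train on experimentally matched data and add a classification head to label catalyst-layer components (platinum, carbon support, ionomer), but the training loop has this same basic shape.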
0 notes
autopackmachines · 3 years ago
Text
What to Look For in a Powder Filling Machine
If you are thinking of buying a powder filling machine, you may be wondering what to look for. Here are some factors to consider: features, machine type, capacity, servo drive, and cGMP compliance. Read on to learn more about these machines. They are useful in a variety of applications, and a powder filling machine is a vital piece of equipment in a manufacturing process, as it can save you a lot of time and money. But which machine is right for you?
Features of a Powder Filling Machine
When choosing a powder filling machine, consider the features that are most important to you. These include the size, weight, and particle characteristics of the powder to be filled. These factors will determine the type of machine that will work best for your needs. Read on to learn more about some of the most important features. You should also look for safety features and ease of operation. Once you have determined the size and type of machine you need, you will be able to choose a quality machine that meets your requirements.
The size of the filling head and the amount of material to be filled will affect the type of powder filling machine you choose. Some machines handle large containers, some small, and some are specially made for both. Although the latter may be more expensive, they will provide better results than cheaper machines. Also, look for dosing systems that can handle different powder particle sizes and fill weights, and containers of various shapes.
Types of Machines
Powder filling machines can be used in many applications, and they must be properly maintained to perform their functions efficiently. Improper maintenance can lead to downtime and inefficiency; with proper care, you can expect a machine to operate well for many years. Listed below are some of the more common types of machines and how they differ from each other.
Auger (Drill) Type Powder Filling Machines. These machines are capable of filling various types of food and beverage products. They have a feed port where powder products are placed, and they are typically designed to work with a range of products, including powders and granules. The augers can be adjusted to fill containers of different sizes, from sachets to bottles. Some of these machines have an overflow collector, and the mainframe heads are protected from damage and positioned in a balanced manner.
Capacity
Buying a powder filling machine requires you to know the characteristics of the powder you will be filling; powder characterization covers the material’s physical properties, behavior, and uses. In some regions, the CCC mark is a mandatory safety mark. Choosing the capacity of a machine based on your actual needs will ensure that you don’t go over your budget. A good machine should be able to hold the volume of product you need.
The size of your machine will depend on how much powder you plan to fill in a single shot. This is important because a small machine may have limited capacity and be difficult to use effectively. You should also choose a machine that has a high output rate. In addition, you should choose a machine made of stainless steel to avoid any corrosion problems. You should also consider its accuracy.
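As a rough illustration of how fill volume relates to machine sizing, here is a back-of-the-envelope auger dosing calculation; every figure in it (density, auger geometry, efficiency) is an invented example, not a spec for any real machine:

```python
import math

# Back-of-the-envelope auger dosing: how many auger revolutions deliver a
# target fill weight? All figures below are illustrative assumptions.
target_dose_g = 250.0          # desired fill per container
bulk_density_g_per_cm3 = 0.55  # measured bulk density of the powder
auger_pitch_cm = 2.0           # axial travel per revolution
auger_diameter_cm = 3.0
fill_efficiency = 0.8          # flights never carry the full theoretical volume

swept_volume_cm3 = math.pi * (auger_diameter_cm / 2) ** 2 * auger_pitch_cm
grams_per_rev = swept_volume_cm3 * fill_efficiency * bulk_density_g_per_cm3
revolutions = target_dose_g / grams_per_rev

print(f"{grams_per_rev:.1f} g per revolution -> {revolutions:.1f} revolutions per dose")
```

In practice, fill-by-weight machines close this loop with a checkweigher rather than trusting the geometry alone, but the arithmetic shows why bulk density and dose size drive machine selection.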
Servo Motor
The servo motor is a common component of a powder filling machine. Stick-pack fillers can be single-lane or multi-lane, requiring an individual servo motor in each filler lane. These motors offer high levels of precision and allow individual adjustment of the filling lanes. Turpack, a leading powder filling machine manufacturer, uses Kollmorgen or Siemens servo motors in its machine designs.
All-Fill first used a servo motor to control its semi-automatic screw filling machines, and the servo motor has continued to evolve as the most accurate method of filling. It started with the B-SV-400 model; since then, there have been several adaptations, including the B-SV-600 and SHA-SV-600. Today, servo motor filling machines are available in many models, including the B-SV-600 and DHA-SV-600.
cGMP compliance
A powder filling machine should be certified as cGMP compliant, meeting GMP standards. This is important for powder and capsule filling, and it also means that all materials used in the machine must be FDA approved. The machine should have an automated cleaning system, and the manufacturer should provide full documentation for the specific equipment. It is best to find a machine that is certified as cGMP compliant and comes with that documentation.
If you are looking for a cGMP-compliant powder filling machine, you have many options. You can choose a single- or dual-wheel model, with throughput ranging from 60 VPM to 200 VPM. There are also vacuum-driven powder wheels that can achieve greater filling accuracy. A powder filling machine can fill between 50 and 1500 mg in a single dose. Source: https://autopackmachine.livejournal.com/8724.html
0 notes
cic2022 · 3 years ago
Text
SAG Mill
https://www.cicemech.com/products/sag-mill/
Semi-autogenous mills, or SAG mills as they are often called, can perform grinding operations equivalent to two or three stages of crushing and screening. SAG mills are characterized by their large diameter and short length compared to ball mills. The inside of the mill is lined with lifting plates that lift the material up the shell, from where it falls off the plates onto the rest of the ore charge. SAG mills are often used in the grinding circuits of modern concentrators, where they can directly produce a finished product size or prepare feed for the downstream grinding stages.

Introduction of SAG Mill

Like ball mills and rod mills, the SAG mill is a horizontally mounted cylindrical grinding machine, and its working principle is basically the same: the processed material itself (with, optionally, a small amount of grinding medium added) is crushed by continuous impact and rolling grinding inside the cylinder body. The steel ball filling rate in the shell is the main feature distinguishing a SAG mill from an autogenous mill. To eliminate the impact of rock accumulation, the autogenous mill allows about 3% balls (φ80–φ120) to be added, while the SAG mill’s steel ball filling is generally 8%–12%, and can reach 15%.
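For a feel of what these filling rates mean in practice, here is a rough, illustrative ball-charge estimate; the mill dimensions and packing fraction below are assumptions for the sake of example, not CIC specifications:

```python
import math

# Rough ball-charge estimate from the filling rates quoted above: the mass
# of steel balls in a SAG mill shell at a given filling fraction. Mill
# dimensions and the ball-bed packing fraction are illustrative assumptions.
diameter_m = 6.5        # e.g., a 21 ft mill
length_m = 3.0          # SAG mills are short relative to their diameter
filling = 0.10          # 8-12% steel ball filling, per the text
packing = 0.6           # a dumped ball bed is roughly 40% voids
steel_density_kg_m3 = 7850

mill_volume = math.pi * (diameter_m / 2) ** 2 * length_m
ball_mass_t = mill_volume * filling * packing * steel_density_kg_m3 / 1000

print(f"mill volume {mill_volume:.0f} m^3 -> ball charge ~{ball_mass_t:.0f} t")
```

At these dimensions, a single percentage point of filling corresponds to several tonnes of steel, which is why the filling rate is treated as a defining parameter of the mill.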
Advantages of SAG Mill

Reduce Lifetime Costs

The full range of specifications and the wide range of applications enable SAG mills to perform grinding operations with fewer configurations than conventional mills. This helps to reduce the investment and maintenance costs of the SAG mill circuit.

Wide Application

Because SAG mills come in a complete range of specifications, their scope of application is wide: a semi-autogenous mill can stand in for two or three stages of crushing and screening, for a rod mill, and for part or all of a ball mill’s grinding duty.

The SAG mill is also the best solution for wet grinding, a duty that crushing and screening circuits struggle to perform.
Efficient Automated Operation

From circuit design to operation and optimization, our process experts help you create an efficient automated process that ensures the grinding results you expect.

Automatic operation saves electricity and reduces grinding medium and liner consumption while increasing production capacity.
Intelligent Grinding Expert System

The unique grinding expert system comprehensively monitors the real-time operation of the SAG mill, its grinding status, and material parameters. The computer control system makes man-machine interaction effective, saves manpower, and improves accuracy and capacity.

Learn more about the Intelligent Grinding Expert System

According to professional data sources, by far the largest SAG mill is 42' (12.8 m) in diameter, powered by a 28 MW (38,000 HP) motor, while CIC is currently capable of manufacturing SAG mills with a maximum diameter of 21' (6.5 m).
1 note · View note