#inventive principle of parameter change
Explore tagged Tumblr posts
Text
CNC development history and processing principles

CNC machine tools, short for Computerized Numerical Control machine tools, are mechatronics products that use digital information to control machining. The operations and their sequence — the relative position of tool and workpiece, machine start and stop, spindle speed changes, workpiece clamping and release, tool selection, coolant-pump switching — are recorded on a control medium as digital codes. That digital information is fed to the CNC device or computer, which decodes it, performs the necessary calculations, and issues commands to the machine's servo system and other actuators so that the machine produces the required workpiece.

1. The evolution of CNC technology: from mechanical gears to digital codes
The Beginning of Mechanical Control (late 19th century - 1940s)
The prototype of CNC technology can be traced back to the invention of mechanical automatic machine tools in the 19th century. In 1887, the cam-controlled lathe invented by the American engineer Herman achieved "programmed" machining for the first time: rotating cams drove the tool's movement. Although this mechanical form of programming was inefficient, it provided a key idea for later CNC technology. During World War II, the surge in demand for military equipment accelerated innovation in machining technology, but traditional machine tools had reached a bottleneck in the machining of complex parts.
The electronic revolution (1950s-1970s)
After World War II, manufacturing still relied largely on manual operation: workers read the drawings, then operated machine tools by hand to produce parts. This way of making products was costly and inefficient, and quality could not be guaranteed. In 1952, John Parsons' team, working with the Massachusetts Institute of Technology (MIT), developed the world's first numerically controlled milling machine, which received instructions through punched paper tape, marking the official birth of NC technology. The core breakthrough of this stage was digital signals replacing mechanical transmission: servo motors replaced gears and linkages, and coded instructions replaced manual adjustment. In the 1960s, the spread of integrated circuits reduced the size and cost of NC systems; Japanese companies such as Fanuc launched commercial CNC equipment, and the automotive and aviation industries took the lead in introducing CNC production lines.
Integration of computer technology (1980s-2000s)
With the maturing of microprocessor and graphical-interface technology, CNC entered the PC control era. In 1982, Siemens of Germany launched the first microprocessor-based CNC system, the Sinumerik 800, whose programming efficiency was 100 times that of paper tape. The integration of CAD (computer-aided design) and CAM (computer-aided manufacturing) software allowed engineers to convert 3D models directly into machining code, bringing the machining accuracy of complex surfaces to the micron level. During this period, equipment such as five-axis machining centers emerged, driving rapid development in the mold-making and medical-device industries.
Intelligence and networking (21st century to present)
The Internet of Things and artificial intelligence have given CNC machine tools new vitality. Modern CNC systems use sensors to monitor parameters such as cutting force and temperature in real time and apply machine learning to optimize machining paths. For example, the iSMART Factory solution from Japan's Mazak achieves intelligent scheduling of hundreds of machine tools through cloud collaboration. In 2023, the global CNC machine tool market exceeded US$80 billion, and China became the largest producing country with a 31% share of output.
2. CNC machining principles: How code drives steel
The essence of CNC technology is to convert the physical machining process into a control closed loop of digital signals. Its operation logic can be divided into three stages:
Geometric Modeling and Programming
After a 3D model is built in CAD software such as UG or SolidWorks, CAM software "deconstructs" the model: it automatically calculates parameters such as tool path, feed rate, and spindle speed, and generates G code (for example, G01 X100 Y200 F500 commands linear interpolation to the coordinates (100, 200) at a feed rate of 500 mm/min). Modern software can even simulate the material-removal process and predict machining errors.
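To make that concrete, here is a minimal Python sketch of what a controller must extract from such a line; the parsing approach and the assumption of absolute XY coordinates are ours for illustration, not the behavior of any particular controller:

```python
import math
import re

def parse_gcode_move(line, start=(0.0, 0.0)):
    """Parse a simple G01 linear-interpolation line such as 'G01 X100 Y200 F500'
    and estimate how long the move takes at the commanded feed rate."""
    words = dict(re.findall(r"([GXYF])([-\d.]+)", line.upper()))
    x = float(words.get("X", start[0]))
    y = float(words.get("Y", start[1]))
    feed = float(words["F"])                       # feed rate in mm/min
    dist = math.hypot(x - start[0], y - start[1])  # straight-line path length in mm
    return (x, y), dist / feed * 60.0              # target point, move time in seconds

target, seconds = parse_gcode_move("G01 X100 Y200 F500")
print(target, round(seconds, 2))  # (100.0, 200.0) 26.83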
Numerical control system interpretation and execution
The "brain" of a CNC machine tool, the numerical control system (such as Fanuc 30i or Siemens 840D), converts G code into electrical pulse signals. Taking a three-axis milling machine as an example, the servo motors of the X/Y/Z axes receive pulse commands and convert rotary motion into linear displacement through ball screws, with positioning accuracy up to ±0.002 mm. The closed-loop control system uses a grating scale (linear encoder) to feed back position error in real time, forming a dynamic correction mechanism.
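A minimal sketch of that closed-loop idea, assuming a simple proportional correction; the gain, cycle count, and noise figures below are illustrative, not values from any Fanuc or Siemens manual:

```python
import random

def servo_step(commanded_mm, measured_mm, kp=0.8):
    """One cycle of closed-loop correction: the grating scale (linear encoder)
    reports the actual position, and the drive is nudged by a fraction of the error."""
    error = commanded_mm - measured_mm
    return measured_mm + kp * error  # corrected position after this control cycle

# Simulate settling onto a 10.000 mm target with small random disturbances.
position = 0.0
for _ in range(20):
    position = servo_step(10.0, position + random.uniform(-0.002, 0.002))
print(f"settled at {position:.4f} mm")  # converges to ~10.0000, within the noise
```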
Multi-physics collaborative control
During machining, the machine tool must coordinate multiple parameters simultaneously: the spindle motor drives the tool at high speeds of up to 20,000 rpm, the cooling system sprays atomized cutting fluid to lower the temperature, and the tool-changing robot completes a tool change within 0.5 seconds. For example, when machining titanium-alloy blades, the system must dynamically adjust the cutting depth according to the hardness of the material to avoid tool chipping.
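A minimal sketch of such an adaptive rule, in Python; the force limit, back-off factor, and depth cap are invented for illustration and would in practice come from tool and material data:

```python
def adjust_depth_of_cut(depth_mm, measured_force_n, force_limit_n=900.0):
    """Back off the depth of cut when the measured cutting force nears the limit,
    and cautiously restore it when there is headroom - a crude adaptive rule."""
    if measured_force_n > force_limit_n:
        return depth_mm * 0.8             # retreat 20% to protect the tool edge
    if measured_force_n < 0.6 * force_limit_n:
        return min(depth_mm * 1.05, 2.0)  # creep back up, capped at 2 mm
    return depth_mm

print(adjust_depth_of_cut(1.5, 950.0))  # 1.2 - force too high, depth reduced
```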


3. The future of CNC technology: cross-dimensional breakthroughs and industrial transformation
CNC technology is currently developing along three major trends:
Multi-process integration: turn-mill machine tools complete turning, milling, grinding, and other processes on a single device, reducing clamping time by 90%;
Additive-subtractive integration: the LASERTEC series machine tools from Germany's DMG MORI combine 3D printing with CNC finishing to directly manufacture aerospace engine combustion chambers;
Digital Twin: by using a virtual machine tool to simulate the actual machining process, the i5 system from China's Shenyang Machine Tool has increased debugging efficiency by 70%.


From the meshing of mechanical gears to the flow of digital signals, CNC technology has rewritten the underlying logic of the manufacturing industry in 70 years. It is not only an upgrade of machine tools, but also a leap in the ability of humans to transform abstract thinking into physical entities. In the new track of intelligent manufacturing, CNC technology will continue to break through the limits of materials, precision and efficiency, and write a new chapter for industrial civilization.
#prototype machining#cnc machining#precision machining#prototyping#rapid prototyping#machining parts
2 notes
Text
The cone calorimeter is a modern instrument used to study the fire behavior of small samples of various materials in the condensed phase, typically in the field of fire safety engineering. It gathers data on combustion products, ignition time, heat release rate, mass loss, and other parameters associated with burning behavior. According to Andrew (1994), the device exposes fuel samples to different heat fluxes at the surface.

The heat release rate measurement is based on Huggett's principle: the gross heat of combustion of an organic material is directly related to the amount of oxygen required for combustion. The instrument's name derives from the conical shape of its radiant heater, which produces a nearly uniform heat flux over the surface of the sample under study.

In the 1970s and 1980s, the importance of a reliable bench-scale tool for measuring heat release rate was becoming clear. A number of such devices had already been built at various institutions, but none proved appropriate for routine engineering use in the laboratory, mainly because of operating difficulty and measurement error. It has been argued (Arthur 1995) that other instruments, such as the substitution burner, could give good accuracy but suffered from complexity and from installation and maintenance difficulties. A new device without these drawbacks was therefore needed. A design based on the oxygen-consumption principle was later developed and proved successful; it was named the cone calorimeter and was first described in a 1982 NBS report.

The basic principle of the cone calorimeter has remained unchanged to this day, although several improvements and additions have been made; the devices in use today have few parts identical to the 1982 original. The most significant changes are the introduction of systems that measure smoke optically and soot yield gravimetrically; other major changes are not functional but are redesigns to ease use and improve operating reliability. In 1985, the first cone calorimeter outside NIST was built at BRI in Japan, followed by one at Ghent University in 1986; later that year, three commercial units were built and sold in the U.S. The number of cone calorimeters placed into service has grown significantly since 1986. Babrauskas and Parker (1987) maintain that the cone calorimeter is the most important bench-scale instrument in the field of fire testing: heat release is the key measurement required in assessing the fire development of products and materials.
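Huggett's principle makes the heat-release-rate measurement a short calculation: for most organic materials, roughly 13.1 MJ of heat is released per kilogram of oxygen consumed. A minimal Python sketch, with made-up oxygen readings:

```python
HUGGETT_E = 13.1e3  # kJ of heat released per kg of O2 consumed (approximate, ~±5%)

def heat_release_rate(o2_in_kg_s, o2_out_kg_s):
    """Estimate heat release rate (kW) from oxygen depletion in the exhaust stream."""
    return HUGGETT_E * (o2_in_kg_s - o2_out_kg_s)

# Hypothetical reading: 0.0010 kg/s of O2 enters the system, 0.0006 kg/s leaves.
print(f"{heat_release_rate(0.0010, 0.0006):.1f} kW")  # -> 5.2 kW
```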
0 notes
Text
Aryabhata and the Birth of Zero: A Legacy That Powers Modern AI and Machine Learning
Introduction
The concept of zero is often taken for granted in our modern world. It seems simple and ubiquitous, a basic number that underpins the technology we rely on daily. But the origins of zero trace back to a brilliant mind in ancient India - the mathematician and astronomer Aryabhata. His invention of zero was not just a mathematical innovation; it laid the foundation for the technological advances that shape our world today. From computer science to artificial intelligence (AI) and machine learning (ML), the legacy of Aryabhata’s work continues to drive us forward.
Aryabhata: The Visionary Mathematician and Astronomer
Aryabhata was an extraordinary thinker who lived in the 5th century during the Gupta period in India, a time of great intellectual and scientific advancements. His most profound contribution was the conceptualization of zero as a place-value placeholder in the decimal system, an idea that changed the way we perform arithmetic and set the stage for future mathematical developments. While his works spanned a range of subjects, from trigonometry to astronomy, it was his treatment of zero that had the most far-reaching implications.
Before Aryabhata, the idea of zero didn’t exist in the way we understand it today. While ancient civilizations, such as the Babylonians, had symbols for nothingness, Aryabhata took this concept further and formalized it in a way that allowed for the development of complex mathematical systems. This shift in thinking made it possible to perform calculations with ease and precision, including the ability to represent large numbers, and led directly to the development of algebra, calculus, and eventually the mathematical models behind modern-day computing.
The Evolution of Zero: From Ancient India to Modern Technology
Though Aryabhata’s invention of zero was groundbreaking, its global acceptance took time. As the concept spread across the world through the Islamic Golden Age and eventually reached Europe, it became an integral part of mathematics. Today, zero is the cornerstone of the binary number system, the basis of all modern computing.
In the world of technology, zero plays a pivotal role in the way digital systems operate. The binary code that powers our computers and devices is composed of two digits: 1 and 0. These on/off states are what enable computers to perform complex calculations and store vast amounts of information. Every piece of technology - from the simplest calculator to the most advanced AI systems — relies on the concept of zero to function efficiently.
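As a toy illustration of zero's placeholder role in binary notation (plain Python, standard library only): each of the four numbers below contains exactly two 1-bits, and only the zero-marked positions keep them distinct.

```python
# Zero as a positional placeholder: same two 1-bits, different positions.
for n in (5, 9, 17, 33):
    print(f"{n:2d} -> {n:08b}")
# Output:
#  5 -> 00000101
#  9 -> 00001001
# 17 -> 00010001
# 33 -> 00100001
```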
Zero’s Connection to Modern AI and Machine Learning
As AI and machine learning (ML) continue to revolutionize industries, it's fascinating to reflect on how these advanced technologies are rooted in the mathematical principles that Aryabhata pioneered. Machine learning, at its core, is about processing data, making predictions, and optimizing results — all of which require complex mathematical algorithms. Zero plays a key role in these processes, from initializing algorithms to managing the flow of data.
In machine learning, for instance, the process of training a model often involves adjusting parameters using optimization techniques like gradient descent. Zero, or values close to it, is crucial in determining how algorithms "learn" and adjust over time. In neural networks, another major component of modern AI, zero functions as a critical part of error correction and network tuning. These algorithms adjust their weights by calculating the difference between predicted and actual outcomes, driving that error toward zero to refine the model for better accuracy.
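A minimal gradient-descent sketch of that idea, using a toy quadratic loss; the loss function, learning rate, and zero initialization are illustrative choices, not any particular framework's defaults:

```python
def grad(w):
    """Gradient of a toy loss L(w) = (w - 3)^2; it vanishes at the optimum w = 3."""
    return 2.0 * (w - 3.0)

w, lr = 0.0, 0.1          # weight initialized at zero, a common starting point
for step in range(50):
    w -= lr * grad(w)     # each update shrinks as the gradient approaches zero
print(round(w, 4), round(grad(w), 4))  # w ~ 3.0, gradient ~ 0.0
```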
The very ability to represent data, process it, and make intelligent decisions is based on complex mathematical models that wouldn't exist without the foundational role of zero. As AI technology continues to evolve, it's exciting to think about the endless possibilities and innovations that can emerge, all thanks to the humble yet powerful concept of zero.
How Pydun Technology is Shaping the Future of AI and ML
As the world of AI and machine learning continues to grow, the demand for skilled professionals is higher than ever. This is where Pydun Technology Private Limited comes in. As a leading provider of AI and ML training, Pydun Technology is committed to empowering individuals and businesses with the knowledge and skills they need to succeed in this rapidly evolving field.
At Pydun, learning is not just about theory; it’s about practical application. With a comprehensive curriculum designed to cover everything from the basics of machine learning to advanced AI techniques, Pydun Technology ensures that students are well-equipped to tackle the challenges of the modern tech landscape. Whether you're a beginner or an experienced professional, Pydun offers tailored training programs that meet you where you are and help you progress to the next level.
The world of AI and ML can seem intimidating, but with the right guidance and training, anyone can master it. Pydun Technology offers hands-on learning experiences, real-world projects, and expert instruction to help you understand the complex algorithms and mathematical concepts that power AI systems - including the essential role of zero. By learning from industry experts, you’ll gain the confidence and skills to contribute meaningfully to the rapidly expanding world of AI and ML.
Why Choose Pydun Technology?
Expert Instructors: Pydun’s team of instructors brings years of industry experience to the table, providing valuable insights into the practical applications of AI and ML.
Comprehensive Curriculum: Pydun offers a detailed, structured curriculum that takes learners from foundational concepts to advanced techniques, ensuring that every student is prepared to excel in the field.
Hands-On Learning: At Pydun, learning is interactive. Students work on real-world projects, solving problems that mirror those faced by companies in the industry.
Personalized Training: Pydun offers customized training programs, catering to both individual learners and corporate teams, ensuring that everyone gets the attention and resources they need to succeed.
Future-Ready Skills: With the rapid advancements in AI and ML, the skills you gain at Pydun will keep you ahead of the curve, enabling you to tap into exciting opportunities in one of the most dynamic fields today.
Conclusion
The legacy of Aryabhata and his invention of zero continues to shape the world of technology today. Zero is the silent enabler of all digital systems, from binary code to artificial intelligence, and it is this very concept that has allowed AI and ML to flourish in the modern era.
If you're ready to step into the future and unlock the potential of AI and ML, Pydun Technology Private Limited is here to guide you. With expert-led training, hands-on experience, and a focus on practical learning, Pydun ensures that you are equipped with the skills and knowledge needed to succeed in this exciting field. Embrace the future today and take the first step towards mastering AI and ML with Pydun Technology - where learning meets innovation.
#best programming course training in madurai#Artificial Intelligence and Machine Learning Courses#Artificial Intelligence Courses#Machine Learning Courses#AI Courses in Madurai#AI and ML Training in Madurai#AI Programming in Madurai#Machine Learning Training in Madurai#internship#best IT training in madurai
1 note
Text
Optical Reflector
The working principle of a reflector is based on the law of reflection: the incident ray, the reflected ray, and the normal lie in the same plane, and the angle of incidence equals the angle of reflection. Reflector surfaces undergo special treatment that allows light to reflect along predetermined paths, thereby changing the direction of light propagation.
The main parameters of reflectors include reflectance, surface roughness, shape, and size. Reflectance determines a reflector's efficiency, while surface roughness affects the quality of the reflected light. The shape and size of a reflector determine its operating mode and range of applications.
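The law of reflection has a compact vector form, r = d − 2(d·n)n for a unit surface normal n. A minimal sketch, assuming NumPy for the vector arithmetic; the example ray is arbitrary:

```python
import numpy as np

def reflect(d, n):
    """Reflect incident direction d off a surface with unit normal n.
    Implements r = d - 2(d.n)n, which keeps d, r, and n in one plane
    and makes the angle of reflection equal the angle of incidence."""
    n = n / np.linalg.norm(n)           # guard: normalize the normal
    return d - 2.0 * np.dot(d, n) * n

incident = np.array([1.0, -1.0, 0.0])   # ray travelling down onto a horizontal mirror
normal = np.array([0.0, 1.0, 0.0])
print(reflect(incident, normal))        # [1. 1. 0.] -> reflected back upward
```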
Company Name: Changzhou Haolilai Photo-Electricity Scientific and Technical Co., Ltd.
Web: https://www.cnhll.com/product/optical-flat-mirror/optical-reflector/
Address: No.10 Wangcai Road, Luoxi Town, Xinbei District, Changzhou, Jiangsu, China.
Phone: 86-519-83200018
Email: [email protected]
Profile: As a High-Tech enterprise in Jiangsu province, HLL boasts a talented team with extensive experience and professional technology. HLL has established the Jiangsu Precision Optical Lens Engineering Technology Center and the Jiangsu Enterprise Technology Research Center, and has obtained multiple invention patents, multiple utility model patents, and multiple Jiangsu High New Tech Product designations.
0 notes
Text
Lal Kitab 2.0- The Synergy of Tradition and AI

"Lal Kitab,"is a set of astrological texts with roots in traditional Indian astrology. This unique system is renowned for its unconventional and pragmatic approach to astrology, distinguishing itself from classical Vedic astrology.
The origins of Lal Kitab are shrouded in mystery, adding an air of intrigue to its teachings. While some attribute its authorship to Pandit Roop Chand Joshi, others believe it was penned by an unknown Muslim saint.
Lal Kitab emphasises the significance of Rahu and Ketu, the lunar nodes, and suggests practical remedies, often involving rituals, charity, and specific items to be donated or used.
It continues to be consulted by individuals seeking insights into their lives and looking for simple and affordable remedies to address challenges and improve their fortunes. What sets Lal Kitab apart is its departure from the conventional use of complex horoscopes and birth charts. Instead, it focuses on the placement of planets in different houses and their influences, employing a symbolic language and metaphors to convey astrological principles.
Introducing Lal Kitab 2.0:
Lal Kitab 2.0 is not really a book to publish; it is a synergy of astrology with AI. It is a combination of two different ages, a combination of art and science. No doubt astrology is pure maths, but maths is still like an art: you need to learn it to become a master of this art. We believe that Lal Kitab 2.0 is more accessible, understandable and useful for every generation. When two powerful sciences come together, they create a miracle for the ordinary world. Though the astrological world is not an ordinary world, we are not saying it is abnormal or something out of the ordinary; it is an extraordinary art that can be learned with some attention and focus.
These days we do not have time to go to an astrologer, and we cannot tell who is right and who is wrong. When it comes to trust, we trust the calculator more than our own mind: humans invented the calculator, and it can never be as smart as the human brain, yet it is more accurate than human calculation. Sometimes even the slightest mistake can change your birth chart completely. The use of AI safeguards your calculation, giving you precision in timings, birth charts, and the positions of the stars, planets and everything else you want to know.
Data Analysis:
You need a few things to create a perfect birth chart: place of birth, time of birth, and date of birth. If these are accurate, you will have a correct birth chart without mistakes, and AI improves the chances of accuracy to 100%. Astrology data analysis with AI involves leveraging artificial-intelligence techniques to process, analyse, and derive insights from astrological data.
Remedy Recommendations:
Lal Kitab has always provided extraordinary yet easily available home remedies to its followers, and with the help of AI and its accurate calculations we believe we will be able to provide tailored remedies to every individual.
Continuous Learning and Adaptation:
Continuous learning is the process by which AI models learn from new data or experiences, allowing them to update their knowledge and improve performance. It is the ability of an AI system to adjust its behaviour or model parameters based on changing conditions or new information. Regular updates based on new data lead to performance improvements, allowing AI models to provide more accurate predictions and make better decisions. Many astrological scenarios are dynamic and evolve over time; continuous learning enables AI systems to stay relevant in such dynamic environments.
Ethical Considerations:
Astrological data often includes personal information. Users should be told clearly how their astrological data will be used, and informed consent should be obtained before collecting and analysing their information. AI systems must ensure robust security measures to protect sensitive data from unauthorised access and breaches, should avoid imposing specific cultural perspectives, and the AI algorithms used in astrology should be transparent and explainable.
AI Astrologer:
AI Astrologer is the synergy of Lal Kitab and AI. Its inventor, Gurudev GD Vashishth, has put all the knowledge of his earlier edition, Lal Kitab Amrit, into AI Astrologer. AI Astrologer is a revolution in the astrology world, as it will save people from frauds and fake astrologers. The AI is claimed to be 100% perfect in its predictions. AI Astrologer respects its readers' privacy and understands that these questions and answers can be private: when you use the kiosk or AIAstrologer.com, you may ask anything you want, everything is categorised in the app, and you receive the answers on your WhatsApp in PDF form.
Conclusion:
The synergy of AI and astrology offers a new dimension to the age-old quest for understanding celestial influences on human life. Embracing this collaboration with ethical principles at its core will contribute to a trustworthy and responsible evolution of astrological practices in the digital age. As we navigate this uncharted territory, the pursuit of knowledge and wisdom remains a constant, guiding both traditional practitioners and technologists alike toward a harmonious coexistence of tradition and innovation. In conclusion, the integration of artificial intelligence (AI) with astrology marks a fascinating intersection of ancient wisdom and modern technology. As we delve into this evolving field, it is crucial to navigate it with ethical considerations, transparency, and respect for cultural diversity.
Visit : https://www.aiastrologer.com/
1 note
Text
Comprehensive Introduction to Robotics Mechanics and Control
In a world increasingly influenced by technological innovation, the field of robotics stands out, paving the way for future advancements that could redefine various industries. The "Introduction to Robotics Mechanics and Control" serves as a foundational pillar for enthusiasts and professionals alike, seeking to understand the complex yet fascinating world of robotics. This comprehensive guide delves deep into the intricate mechanics underlying robotic applications and the control systems that ensure these machines can perform tasks accurately, efficiently, and flexibly. Understanding these concepts is not just for academic or industrial pursuits; it is a window into a future where robotics impacts every facet of our lives.

The journey through "Introduction to Robotics Mechanics and Control" is akin to unlocking new levels of a sophisticated game, where each stage uncovers deeper, more complex mysteries and marvels of the robotic world. From the basics of design and movement to the nuanced algorithms that provide robots with almost human-like dexterity and decision-making capabilities, each page turned is a step towards not just understanding but inventing the future. It's not merely about machines; it's about the harmonious blend of physics, mathematics, and computer science that creates entities capable of changing the world.

As we embark on this enlightening journey, it is crucial to remember that the field of robotics is ever-evolving. What may be a groundbreaking innovation today could become a standard feature tomorrow. Therefore, the "Introduction to Robotics Mechanics and Control" is more than a guide; it's a compass that directs curious minds towards uncharted territories waiting to be discovered. This exploration promises to challenge, inspire, and ignite a passion for a realm where science fiction meets reality.

Grasping the Fundamentals of Robotics

At the heart of understanding robotics is grasping the fundamental principles that govern how robots are designed, structured, and brought to life. The inception of any robotic system starts with mechanics, the branch of physics concerned with the behavior of physical bodies when subjected to forces or displacements. By studying mechanics in the context of robotics, one learns how robots move, interact with physical objects, and adhere to the laws of physics. These foundational insights are crucial for designing robots that can efficiently navigate and operate within their environment, whether it's on a factory floor, inside a laboratory, or on the surface of another planet.

The mechanics of robotics also extends to the materials used in constructing robots. Different applications require various materials, each with unique properties that affect a robot's functionality and efficiency. Understanding these materials isn't just about knowing their physical properties; it involves insight into how they interact with motors, sensors, and other robotic components. This knowledge ensures the creation of robots that are not just functional but also durable, capable of withstanding the environments they operate in.

Another fundamental aspect is the kinematics of robots, which deals with motion without considering the forces that cause it. Here, the focus shifts to the movement patterns of robots, how their parts coordinate and synchronize, ensuring smooth, calculated actions.
Grasping this concept involves understanding geometric representations and transformations, joint parameters, and linkage descriptions that form the language of robotic movement. It's through mastering kinematics that one can predict and control a robot's behavior, a critical skill in the development and application of robotics.

But mechanics alone doesn't bring a robot to life; it's the integration with control systems that propels these machines into action. Control systems in robotics help in managing, commanding, directing, or regulating the behavior of other devices or systems. These range from simple remote controls to complex neural networks, each serving a unique function in various robotic applications. Understanding control systems is pivotal in ensuring that robots can perform required tasks on their own, learn from their surroundings, and even make decisions in unpredictable environments.

Exploring the Genesis of Modern Robotics

The modern landscape of robotics didn't materialize overnight; it's the culmination of centuries of scientific achievements and technological advancements. The genesis of modern robotics can be traced back to the era of industrialization, a period marked by the birth of automation and mechanization. It was the quest to improve efficiency and productivity that led to the advent of machines designed to mimic and eventually surpass human physical capabilities. These initial steps were humble, with simple machines performing rudimentary tasks, but they set the foundation upon which contemporary robotics is built.

As the 20th century progressed, so did the ambitions of inventors and scientists. The space race and the Cold War provided unique platforms for rapid advancements in robotics. It was no longer about simple machines; the goal had shifted to creating entities that could think, adapt, and make decisions. This era saw the introduction of programmable robots, capable of being coded to perform various tasks, and the birth of artificial intelligence, a field that would redefine what robots could potentially achieve. This historical context is crucial, as it highlights the evolutionary journey of robotics, painting a picture of relentless human ambition and intellectual prowess.

Significance of Mechanics and Control Systems

Mechanics and control systems represent the heart and brain of robotics, respectively. Without mechanics, robots would be lifeless frames, and without control systems, they would be entities without purpose or direction. The significance of these elements cannot be overstated, as they collectively contribute to the efficacy, autonomy, and versatility of robots. With advanced mechanics, robots can navigate uncharted terrains, handle delicate objects, and perform tasks with precision that rivals or exceeds human capabilities.

Control systems, on the other hand, breathe intelligence into robots. These sophisticated networks of algorithms and sensors enable machines to perceive their environment, process information, and respond with appropriate actions. The evolution of control systems has reached a point where robots can learn from past experiences, adapting their behavior in ways that were once the sole domain of living beings. This convergence of learning ability and autonomy is what's steering the current generation of robots towards new horizons of capabilities and achievements.

In the realm of practical application, the synergy between mechanics and control systems is creating opportunities across diverse fields.
From manufacturing plants and healthcare facilities to research labs and space exploration, the footprint of advanced robotics is ubiquitous. These systems are not just performing tasks but are also managing complex operations, solving intricate problems, and even exploring the mysteries of other worlds. The future of robotics, therefore, rests on further advancements in mechanics and control systems, driving forward the boundaries of what these extraordinary machines can accomplish.

Delving into Robotics Mechanics

Embarking on the "Introduction to Robotics Mechanics and Control" journey means immersing oneself in the detailed mechanics that form the backbone of every robot. Robotics mechanics is not a singular concept but a vast field that integrates various principles from traditional mechanics and applies them uniquely to robots. It encompasses everything from how robots move and interact with their environment to the very materials from which they are made. It is through these mechanics that robots can perform with the precision, efficiency, and flexibility that modern applications require.

Understanding robotics mechanics is essential because it lays the foundation upon which all robotic functions are built. When we talk about robots, we often envision autonomous machines capable of carrying out complex tasks, sometimes in environments unsuitable for humans. However, behind this autonomy is a world of intricate mechanics working seamlessly to initiate motion, manage force, and maintain balance. Thus, delving into robotics mechanics means unraveling the complexities behind these autonomous capabilities. This exploration is fundamental to both current and future advancements in the field.

Robotics mechanics doesn't remain static; it evolves with each technological advancement. With every new material discovered, every fresh insight into power systems, and every innovative motion technique developed, the mechanics of robotics grow increasingly sophisticated. This evolution expands the horizons of what robots can do, pushing the boundaries from the floors of manufacturing factories to the depths of space.

However, this field isn't just about the robots themselves; it's also about the broader impacts these mechanical advancements have on industries and societies worldwide. As robotic mechanics advance, so too do the capabilities and roles of robots in various sectors. They're revolutionizing assembly lines, transforming healthcare, exploring unreachable cosmic territories, and doing much more. They're not just machines; they're harbingers of a new era, and it all starts with the mechanics that move them, the heart of robotics itself.

Core Principles of Robotics Mechanics

The journey through the core principles of robotics mechanics begins by peeling back the layers to understand the components and concepts that form a robot's mechanical basis. This foundation is rooted in classical mechanics, borrowing established principles and evolving them to suit the unique needs of robotic applications. Here, every piece, from the smallest screw to the most complex joint arrangement, plays a role in ensuring the robot functions as desired, offering a symphony of movement and capability that is both fascinating and revolutionary.

Dissecting the Mechanics: From Levers to Pulleys

The simplest elements of robotics mechanics draw from age-old mechanical concepts, including basic machines like levers and pulleys.
These fundamental components might seem rudimentary, but they are integral to the complex movements and operations within a robot. Levers, for example, are crucial in imparting motion, offering mechanical advantages that are exploited to achieve force amplification in robotic arms or legs. Pulleys provide similar advantages, particularly in robots requiring linear motion, as they help reduce the energy needed to move objects, reflecting the utility and efficiency embedded in these classic mechanics.

Understanding how these simple machines integrate into complex robotic systems reveals the genius of mechanical engineering in robotics. It's not about reinventing the wheel but rather about using tried and tested mechanical principles to drive innovation. This deep integration of simple mechanics lays a solid groundwork, ensuring that regardless of how advanced or sophisticated robots become, they are grounded in reliable, time-tested mechanical laws.

The blend of these classical mechanics with modern engineering practices is indicative of the evolution within the field. Today's robots might operate using advanced algorithms and be powered by cutting-edge technology, but beneath all that, they still rely on the fundamental principles of levers and pulleys, among other mechanical basics. It is this harmonious blend of old and new that enables the continuous advancement of robotic capabilities, making what was once thought impossible a reality today.

The Role of Physics in Robotics

Diving deeper into the mechanics necessitates an exploration of physics in robotics, as the two are inextricably linked. Physics provides the foundational laws upon which all robotic functions are based, from motion and energy to force and momentum. In the realm of robotics, these laws dictate how robots move, how they interact with objects, and how they can manipulate their environment. Without these fundamental principles, the precision and control we see in robots today would simply not exist.

Roboticists regularly tap into various physics domains to optimize robotic functions. For instance, electromagnetism is crucial in operating motors and sensors, while principles from thermodynamics are used to manage a robot's heat generation and dissipation. Even quantum physics, with its insights into atomic and subatomic levels, finds applications in developing new materials and sensors for robotics.

Understanding the role of physics in robotics also extends to anticipating and designing around the limitations these laws impose. It's about striking a balance between pushing the boundaries of what's possible and respecting the unyielding constraints of the physical world. Robotics doesn't just apply physics; it dances with it, choreographing movements and capabilities that conform to and yet also challenge these universal laws. Through physics, we can predict how a robot would behave in different scenarios, control its actions with precision, and ensure its interaction with the physical world is consistent with established laws. This predictive and regulatory capability is pivotal, forming the bedrock upon which the reliability and efficiency of robots are built.

Kinematics and Dynamics: The Motion Facilitators

The principles of kinematics and dynamics serve as the navigators in the journey of understanding robotic motion. Kinematics focuses on motion description, control, and prediction without concern for the forces causing that motion.
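As a minimal, self-contained illustration of those kinematic ideas (joint parameters in, end-effector position out), consider a planar two-link arm; the link lengths and angles below are arbitrary example values:

```python
import math

def forward_kinematics(theta1, theta2, l1=0.4, l2=0.3):
    """Planar 2-link arm: map joint angles (rad) to the end-effector (x, y).
    Pure kinematics - geometry only, no forces, exactly as described above."""
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y

print(forward_kinematics(math.radians(30), math.radians(45)))
# ~ (0.424, 0.490): where the wrist ends up for those two joint angles
```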
In robotics, kinematics involves determining the paths and spaces a robot can move within, ensuring the robot's joints and appendages work in harmony to achieve smooth, coordinated movements.

Dynamics goes a step further, bringing into consideration the forces that influence motion. This branch is crucial for understanding how to impart and control the movements of a robot. It's not just about ensuring motion; it's about guaranteeing stability, efficiency, and precision in these movements. When robots interact with their environment, whether it's picking up a payload or maneuvering through uneven terrain, dynamics is key in controlling these interactions, ensuring they're not just successful but also safe and reliable. Together, kinematics and dynamics facilitate the seamless motion we observe in robots. They're pivotal in the design and operation stages, ensuring not only that robots move but that they do so with purpose and precision.

Statics and Elasticity in Robotic Structures

Delving further into the mechanical world of robots, statics and elasticity emerge as crucial fields of study. Statics deals with the mechanics of materials and structures in a state of rest or constant velocity. It's vital for ensuring that a robot's structure can withstand the loads and stresses it encounters without succumbing to wear and tear. Here, the focus shifts to analyzing and designing structures that offer the perfect balance between strength and flexibility.

Elasticity complements this by focusing on materials' ability to deform under stress and return to their original shape afterward. This property is invaluable in robotics, where components often need to withstand various forces without permanent deformation. Robots designed with elasticity in mind can endure more physical stress, elongating their operational life and increasing their reliability. Both statics and elasticity are integral to maintaining the structural integrity of robots. By understanding and applying principles from these fields, engineers can design robots that are not only more resilient and durable but also capable of performing more complex tasks in more challenging environments.

Thermodynamics and Heat Transfer: Cooling Robotic Systems

No exploration of robotics mechanics would be complete without addressing thermodynamics and heat transfer. Robots, like all machines, generate heat during operation, and managing this heat is crucial for maintaining optimal performance. Thermodynamics allows us to understand the heat generated within robotic systems, guiding the creation of mechanisms that can effectively dissipate this heat to prevent overheating and potential system failures.

Heat transfer plays a complementary role, focusing specifically on how heat moves through different materials. In robotics, this is crucial for designing cooling systems that keep the robot's internal temperature within safe limits. These systems might leverage conduction, convection, or radiation to transfer heat away from sensitive components, thereby safeguarding the robot's functionality and durability. Together, thermodynamics and heat transfer form a critical defense mechanism for robots, protecting them from the dangers of their own operational heat. They ensure that robots can continue operating efficiently, even under high-stress conditions or during lengthy periods of activity.

Fluid Mechanics in Robotics: Hydraulic and Pneumatic Systems

The realm of fluid mechanics opens up a world of possibilities for robotic movement and power.
Hydraulics and pneumatics, both rooted in fluid mechanics, have become fundamental in the field of robotics. Hydraulic systems use liquid fluid—often oil—in a confined space to transfer power from one location to another. These systems are prized in robotics for their incredible power, precision, and reliability, especially in heavy-duty robots that require significant force. Pneumatic systems, on the other hand, rely on gaseous fluids—typically air—under pressure. They are generally simpler and more flexible than their hydraulic counterparts, making them ideal for lighter, quicker tasks. Pneumatic systems are often found in robotic arms in manufacturing, where they perform repetitive tasks with speed and precision.

Both systems showcase the versatility and potential of fluid mechanics in robotics. By harnessing the power of fluids, robots can achieve greater force and movement without a corresponding increase in size or weight. This ability makes robots more adaptable and capable, ready to meet the diverse demands of modern applications.

Intricacies of Material Science in Robotics

Material science forms the cornerstone upon which the tangible aspects of robots are built. This field goes beyond merely selecting materials for different parts of a robot. It involves diving deep into the properties of various materials—metals, polymers, composites—and understanding how these properties can enhance or impede a robot's functionality. The right materials can make a robot stronger, more flexible, or more energy-efficient, creating possibilities for new applications and capabilities.

Metals, Polymers, and Composites: Pros and Cons

The discussion of materials in robotics mechanics invariably leads to the comparison between metals, polymers, and composites, each with its own set of advantages and disadvantages. Metals have been a staple in machinery for centuries, known for their strength and durability. In robotics, metals, particularly alloys, are valued for their ability to withstand high stress and temperatures, making them ideal for structural components and high-performance parts. Polymers, however, bring a different set of benefits to the table. These materials, made of long, repeating molecular chains, are generally lighter than metals and offer greater resistance to corrosion. They also possess a higher degree of flexibility, which can be advantageous in robots that require a wider range of motion or those that need to absorb high impacts. Composites
0 notes
Text
If you’re wondering
The reason I don’t rag as often about the far right is very simple:
You already know their fucking problems. There’s nothing new. It’s just hitting the same notes on newspaper after newspaper, day by day, cherry picking the unpleasant history from the history textbooks in favor of all other history to emphasize a point.
You can’t find an internet news column that isn’t speaking about all the real problems, and then, when they run out of the social credit to keep blabbing about those, inventing new parameters to talk about them being a problem. Entire new metrics to go, “they’re outside of them, look how they’re transgressing now!”
Abortion, climate change, gun rights, racism, sexism, religious overreach, taxes and industrial regulations. Over and over again. Yes, “What about white supremacism and colonialism?” Covered under racism. “What about churches trying to take over publishing companies and use religious criteria on secular books?” Covered under religious overreach, as is judges trying to judge from biblical interpretations of what’s right according to god, not man. Religious folk trying to enforce their specific religious views on those that are not religious in the public sphere.
I don’t need to bring up the Trail of Tears or Manifest Destiny. They are brought up ad nauseam. I don’t need to talk about how the Nazis were real and bad, or the very real Holocaust, because it never stops being talked about by people constantly trying to use it to tangentially assign the modern American right wing culpability and responsibility for it, casually, with no room for rebuttal and no room for argument, just zingers and then social faux pas if you zap back.
Deafening silence about left-wing failures. They just attribute those to being “right wingers of their day,” dust their hands and move on. So even the failures of authoritarian left-wingers become the responsibility of conservatives. Even the bullshit of the Soviet Union gets double thought of as “state capitalist right-wing violence,” because many left-wingers feel state violence and leftism are paradoxical, so it must be right wing when practiced, and they reject the whole premise ideologically.
A million books in public school libraries about the Nazis, a quarter million published and stocked in the last year attributing Nazis to the right wing (despite the fact they very much were anti-capitalist and socialist, this is not disputable), and virtually no books added about how shit Soviet, Asian or African socialism was to the people living under the boots under Marxist principles. Just in case you might want any message other than, “Nazis = bad.”
But when left-wingers try to silence majority opposition to nuclear power because it “doesn’t overturn the current private enterprise model of power,” while screaming about how we just can’t do business as usual with coal and gas anymore without destroying the planet, and aggressively filibuster any conversation that isn’t the one they WANT to have, the one that makes them look right and on the right side of history, that shit needs to be talked about. They aren’t holding themselves accountable, and the modern right wing is so ineffectual and off in the weeds making itself look stupid that it’s incapable of even defending itself, let alone taking the far left to task. When a literal communist agitator and false historian writes bogus articles about how the US military kicked off the great Native American genocide by spreading smallpox through infected blankets, predating germ theory and germ warfare by a century, and gets less than a slap on the wrist for spreading that lie, no one repeats it.
I don’t need to, nor want to, defend right wingers. Not my guys, not my circus. I want to point out the hypocrisy and the garbage in the other alternative. Maybe incentivize conversations about candidates that aren’t coasting on, “not Trump”ism that can then just do whatever they want, because it’s them or, “Someone like Trump.”
0 notes
Text

Science & God’s Existence
By Author Eli Kittim
Can We Reject Paul’s Vision Based On the Fact that No One Saw It?
Given that none of Paul’s companions saw or heard the content of his visionary experience (Acts 9), on the road to Damascus, some critics have argued that it must be rejected as unreliable and inauthentic. Let’s test that hypothesis. Thoughts are common to all human beings. Are they not? However, no one can “prove” that they have thoughts. That doesn’t mean that they don’t have any. Just because others can’t see or hear your thoughts doesn’t mean they don’t exist. Absence of evidence is not evidence of absence. Obviously, a vision, by definition, is called a “vision” precisely because it is neither seen nor observed by others. So, this preoccupation with “evidence” and “scientism” has gone too far. We demand proof for things that are real but cannot be proven. According to philosopher William Lane Craig, the irony is that science can’t even prove the existence of the external world, even though it presupposes it.
No one has ever seen an electron, or the substance we call “dark matter,” yet physicists presuppose them. Up until recently we could not see, under any circumstances, ultraviolet rays, X-rays, or gamma rays. Does that mean they didn’t exist before their detection? Of course not. Recently, with the advent of better instruments and technology we are able to detect what was once invisible to the human eye. Gamma rays were first observed in 1900. Ultraviolet rays were discovered in 1801. X-rays were discovered in 1895. So, PRIOR to the 19th century, no one could see these types of electromagnetic radiation with either the naked eye or by using microscopes, telescopes, or any other available instruments. Prior to the 19th century, these phenomena could not be established. Today, however, they are established as facts. What made the difference? Technology (new instruments)!
If you could go back in time to Ancient Greece and tell people that in the future they could sit at home and have face-to-face conversations with people who are actually thousands of miles away, would they have believed you? According to the empirical model of that day, this would have been utterly impossible! It would have been considered science fiction. My point is that what we cannot see today with the naked eye might be seen or detected tomorrow by means of newer, more sophisticated technologies!
——-
Can We Use The Scientific Model to Address Metaphysical Questions?
Using empirical methods of “observation” to determine what is true and what is false is a very *simplistic* way of understanding reality in all its complexity. For example, we don’t experience 10 dimensions of reality. We only experience a 3-dimensional world, with time functioning as a 4th dimension. Yet quantum physics tells us there are at least 10 dimensions to reality: https://www.google.com/amp/s/phys.org/news/2014-12-universe-dimensions.amp
Prior to the invention of primitive microscopes in the 17th century, you couldn’t see germs, bacteria, viruses, or other microorganisms with the naked eye! For all intents and purposes, these microorganisms DID NOT EXIST! It would therefore be quite wrong to assume that, because a large number of people (i.e. a consensus) cannot see it, an unobservable phenomenon must be ipso facto nonexistent.
Similarly, prophetic experiences (e.g. visions) cannot be tested by any instruments of modern technology, nor investigated by the methods of science. Because prophetic experiences are of a different kind, the assumption that they do not have objective reality is a hermeneutical mistake that leads to a false conclusion. Physical phenomena are perceived by the senses, whereas metaphysical phenomena are not perceived by the senses but rather by pure consciousness. Therefore, if we use the same criteria for metaphysical perceptions that we use for physical ones (which are derived exclusively from the senses), that would be mixing apples and oranges. The hermeneutical mistake is to use empirical observation (that only tests physical phenomena) as “a standard” for testing the truth value of metaphysical phenomena. In other words, the criteria used to measure physical phenomena are quite inappropriate and wholly inapplicable to their metaphysical counterparts.
——-
Are the “Facts” of Science the Only Truth, While All Else is Illusion?
Whoever said that scientific “facts” are *necessarily* true? On the contrary, according to Bertrand Russell and Immanuel Kant, only a priori statements are *necessarily* true (i.e. logical & mathematical propositions), which are not derived from the senses! The senses can be deceptive. That’s why every 100 years or so new “facts” are discovered that replace old ones. So what happened to the old facts? Well, they were not necessarily true in the epistemological sense. And this process keeps repeating seemingly ad infinitum. If that is the case, how then can we trust the empirical model, devote ourselves to its shrines of truth, and worship at its temples (universities)? Read “The Structure of Scientific Revolutions” by Thomas Kuhn, a classic book on the history of science and how scientific paradigms change over time.
——-
Cosmology, Modern Astronomy, & Philosophy Seem to Point to the Existence of God
If you studied cosmology and modern astronomy, you would be astounded by the amazing beauty, order, structure, and precision of the various movements of the planets and stars. The Big Bang Theory is the current cosmological model which asserts that the universe had a beginning. Astoundingly, the very first line of the Bible (the opening sentence, i.e. Gen. 1.1) makes the exact same assertion. The fine tuning argument demonstrates how the slightest change to any of the fundamental physical constants would have changed the course of history so that the evolution of the universe would not have proceeded in the way that it did, and life itself would not have existed. What is more, the cosmological argument demonstrates the existence of a “first cause,” which can be inferred via the concept of causation. This is not unlike Leibniz’ “principle of sufficient reason” nor unlike Parmenides’ “nothing comes from nothing” (Gk. οὐδὲν ἐξ οὐδενός; Lat. ex nihilo nihil fit)! All these arguments demonstrate that there must be a cosmic intelligence (i.e. a necessary being) that designed and sustained the universe.
We live in an incredibly complex and mysterious universe that we sometimes take for granted. Let me explain. The Earth is constantly traveling at 67,000 miles per hour and doesn’t collide with anything. Think about how fast that is. The speed of an average bullet is approximately 1,700 mph. And the Earth’s speed is 67,000 mph! That’s mind-boggling! Moreover, the Earth rotates roughly 1,000 miles per hour, yet you don’t fall off the grid, nor do you feel this gyration because of gravity. And I’m not even discussing the ontological implications of the enormous information-processing capacity of the human brain, its ability to invent concepts, its tremendous intelligence in the fields of philosophy, mathematics, and the sciences, and its modern technological innovations.
It is therefore disingenuous to reduce this incredibly complex and extraordinarily deep existence to simplistic formulas and pseudoscientific oversimplifications. As I said earlier, science cannot even “prove” the existence of the external world, much less the presence of a transcendent one. The logical positivist Ludwig Wittgenstein said that metaphysical questions are unanswerable by science. Yet atheist critics are incessantly comparing Paul’s and Jesus’ “experiences” to the scientific model, and even classifying them as deliberate literary falsehoods made to pass as facts because they don’t meet scholarly and academic parameters. The present paper has tried to show that this is a bogus argument! It does not simply question the “epistemological adequacy” of atheistic philosophies, but rather the methodological (and therefore epistemic) legitimacy of the atheist program per se.
——-
#scientificmethod#Godsexistence#religious experience#visions#ThomasKuhn#scientism#technology#metaphysics#empiricism#firstcause#scientificdiscoveries#quantum mechanics#physicalphenomena#godandscience#metaphysicalphenomena#exnihilonihilfit#bertrand russell#immanuel kant#a priori#fundamentalphysicalconstants#paul the apostle#elikittim#thelittlebookofrevelation#principleofsufficientreason#leibniz#parmenides#ek#William Lane Craig#big bang#apologistelikittim
38 notes
·
View notes
Text
ladies and gentlemen this is ask dump no. 5
aw scrap here we go again!
answered asks include body modification as the opposite of empurata, Mutacons making bandages out of kibble, kibble used as furniture, numbers of Sweeps, a DILF alligator, RID15 Tidal Wave, a BIG infodump on dealing with the circus that is Iacon’s media, Cybertronian muppets, a WIP of Elita Infin1te (or rather her sword), and the many secret sufferings of Alpha Trion.
yea, sorta! body modification in SNAP is more limited than in canon. you can’t simply switch out your body like the total frame reformats of IDW or TFP, and losing a limb can be permanent if not healed in time. for the most part, the frame you have is the frame you’re stuck with, and those frames fall within specific parameters.
HOWEVER-
some modification and upgrades do exist! the most prominent here would be a prosthetic helm like Lugnut. if the processor is left intact and attached after a helm injury, a new helm can be sculpted, with extra optics to make up for the lower quality of artificial optics, and as visibly different as possible to differentiate from empurata. other replacements and prosthetics are common after debilitating injury where the original body part cannot be saved. whether or not the prosthetic is as good as the original depends on the individual and the specific injury. there are also functional medical upgrades, like thicker armor attachments, alt mode additions, etc. almost every upgrade is for the express purpose of improving one’s frame for their function, and there’s definitely a limit to them. you can’t give yourself new limbs if you only had four to begin with. a grounder cannot become a flier. the spark can only power so much mass in the frame, and some people have adverse reactions that mean the upgrades don’t take and must be removed.
this sort of relates to the next point here-
yes, with some caveats.
Cybertronians are a segmentary species, so they can detach some body parts for a bit without negative consequences, as long as that body part is reattached for revitalization and repair. many folks can do this without any medical assistance for the less integral kibble. for instance, Kup uses his tow arm as a walking stick, but he has to reattach it whenever he wants to go into alt mode, and if he doesn’t transform he still needs to reattach it for a couple hours every day at minimum. so if a Mutacon were to create a makeshift splint out of kibble and detach it, it would likely be fine, as long as they got that kibble back. otherwise, they’ve lost a whole chunk of their body that they can’t just regenerate.
for shifting armor to cover a wound without detaching it, that depends on the nature of the wound. if it’s ragged, large, or in areas with a lot of joints or movement, it might be difficult to shuffle around plating to cover it. a more superficial injury in a less delicate area would be easier
sort of! it’ll depend on the individual’s kibble, of course! double checking SNAP Bulkhead, i don’t think he could, because his kibble isn’t large enough. but Scylla could probably use her alt mode arms as a chair, Wreck-Gar has a built in backpack and belly bag, and of course the Necrobot uses his wingcloak as hands. different kibble with different bonus uses
the ideal number of Sweeps is seven, since less than that means they don’t have enough collective processing power to function optimally. more than seven, however, puts a strain on that collective processing power to smoothly operate so many at once. so there’s usually packs of as close to seven as they can get.
as to how many can just exist at the same time, it’s limited only by how many Scourge is willing to forge. he first invents them in s1e06 A Use for Army-building! An Upgrade to Sweeps. by the next episode they figure out that having dozens of them running around is... well it’s about as chaotic as having dozens of flying puppies with hands and weapons would be. in large numbers they’re very difficult to control. good thing Galvatron is excellent at commanding his new army!
(the post this is referring to) @oldboyjensenhinglemeier thanks Dilf Waitress, i can always rely on you
(the post this is referring to) i think that’s fantastic, i’d love to see a Cybertronian whale. imagine the size of the holding cell you’d have to have for him!
oooohohoho what a sticky subject. here’s a quick rundown on faction ideology to give you some context for how they operate and thus deal with the media. the heroes aren’t referred to as heroes, but rather as vigilantes at best and violent gangs in a turf war at worst. Froid has remotely diagnosed them with pathological dissent. at the same time, some folks have jumped on the market to make hero merch, and it becomes a very lucrative business for some. public opinion is constantly torn between fear and anger at how they do whatever they’d like and gratitude and admiration for how they throw themselves in harm’s way to prevent disaster and save people. it’s really a giant mess all the time that changes by the day.
there is of course the whole snafu surrounding the media’s portrayal of the Elite Guard as a backup team for the Autobots, and Elita 1 as Optimus’ sidekick. and Elita 1 is Not Happy about that. Elita 2 is startlingly good at winding the reporters around her little finger and always seems to know just what to say, whereas Elita 3 just grumbles at the cameras, even sometimes demanding they respect boundaries or be locked in the nearest building with the use of her powers. Elita 4 barely notices them unless she’s in the mood to prank someone, and Elita 5 just avoids them, as they tend to dramatize her size and thus her danger. given their excellent teamwork and how they’re (mostly) in favor of reform instead of anarchy, the Elite Guard would actually have a good shot at getting along with the news, except they bow to precisely no one, including the people wanting to interview them, so instead they come across as a standoffish and self-serving clique with dangerous habits
the Decepticons are in the bad-boy limelight and they love it. well, at least Galvatron, Hellscream, and Thunderblast do. Galvatron takes advantage of every opportunity to pontificate on the evils of society and the right to rise up for freedom. broadcasters have learned to cut the cameras as soon as he starts speaking so his ideas don’t get the chance to spread too far. Hellscream cares less about principles and more about scaring the living daylights out of every reporter he sees, often leaving them with cracked equipment and ringing audials from the sheer destructive power of his voice. Thunderblast just wants to preen in all the attention and boy does she get it. Cyclonus actively avoids most gawkers, Scourge talks too long and complicated to make good news, Drift either ignores them or sends them away with some lofty spiritual advice, and Triptych is dangerously unpredictable so most reporters have learned to stay away from him.
the Predacons came into existence in a negative light, and they were grimly prepared for it. after all, Sixshot used to be a Decepticon, and their falling-out and defection caused quite a stir. when Abominus first appeared, the fearful reaction of the public to such an ‘abomination’ is actually how he chose his name in the first place. Airachnid loves tormenting reporters with nuclear-grade sarcasm and subtle threats, but if anyone makes her truly mad she’ll string them up in her web cabling and leave them hanging. she also flaunts that cabling by using her darts to knit nets, shawls, and other decorations, despite the fact that getting cabling tangled up in seams and joints can lead to something called entrapment protocols, mentioned in the seventh ask here. Enforcers use capture equipment designed to trigger entrapment protocols, so her mimicry of that as nothing more than a casual accessory is a big ‘frag you’.
Soundwave.... is a category of his own. he only comes into being in the fourth season, but the media soon learns to quake at the thought of encountering Soundwave, and his minicons are little better. there’s at least one instance where he Rosanna-rolls the entirety of Iacon.
the Autobots keep wavering between ‘the only true good ones of all these vigilantes’ and ‘the worst possible people in the world, hide the children, lock the doors’ in the eyes of the media. Optimus does his best to treat everyone fairly, and the Mistress usually has something encouraging to share. much like Galvatron but for completely opposite reasons, broadcasters have learned to cut cameras when Ultra Magnus starts talking, because his encyclopedic knowledge of law means he regularly lists every instance of malpractice, abuse, illegality, and disrespect that he sees in the average reporter, Enforcer, or politician, which is not the kind of upbraiding that would serve the propaganda machine. however, it does get him the attention of Tyrest, who leverages legality and public opinion to try and draw Ultra Magnus into an agreement during s3e03- A Councilmember’s Boon! An Upgrade to Legality. Rodimus is a chaos beast who has been known to snatch cameras for selfies. it’s kind of a tossup as to whether Cheetor will be going slow enough to show up in the footage or not.
now, i can’t talk about the media without mentioning the feral force of nature that is Rewind. the best of the best, he’s the only one willing to brave the battlefields for an up-close look, constantly endangering himself in order to get the freshest scoop. he might not always hold opinions in line with the mandated propaganda about these vigilantes, but the media lets him get away with it, since he’s the most successful at getting them more news. this has caused him to be targeted at least once, unfortunately.
love this question. love it. you know those lil remote controlled robot dogs, or things to that effect? i’m imagining that’s what Cybertronian muppets are like, since they can create robotics and animatronics with a lot more finesse and ease than we can. in fact, making fabric is probably harder for them than robotics, since they don’t have the same materials as we do to work with. but anyway, these muppets wouldn’t be limited by what a hand can do to puppet them around, being instead remote controlled from off stage, so i don’t know if they’d have that kind of visual gag. maybe instead there would be fourth-wall breaking where one muppet snatches the remote of another?
the painful thing about this answer is that i have a design i’m happy with EXCEPT FOR THE HELM i have sketched and resketched a dozen different ideas ugh. the body looks fine, all five of them combined in a way that makes sense to me, but i just CANNOT get the helm right i’m so angry. anyway here’s the Cyber Caliber, all of their swords combined into one massive weapon
the more accurate question is, what hasn’t happened to him. he’s been through a lot, the poor mech. but i’ll list some things for you:
that one time he had a sibling be erased from reality
that one time he had to murder another sibling because they decided evil was fun
that one time a fragging beachball stole his work
the fact he doesn’t know if his twin is alive or not
that one time he was a junker running for his life
that one time he was too late to save the Terminus Blade, and it was stolen
that one time his pride and joy, the Athenaum Sanctorum, was destroyed, and everything archived there was lost
that other time the same fragging beachball stole his work
that other time he was a junker hiding for his life
the fact that the theft of his diary started a whole new branch of religion and he has to read his own words as if they’re sacred
the fact that the title of Trion was in fact derived from his diary, and the sheer painful irony of being given the title of Trion.
that one time he had to rip off some fingers to fit in
that one time Trypticon showed up, a while before the JAAT was founded, and he had to take it on alone
that other time Trypticon showed up when the JAAT opened and he had to hand out some precious relics to children to protect the school
aaaaand his current reason for drinking! the fact that of all twenty-something heroes running around, he only knows who THREE of them are because he only gave out THREE RELICS! and relics just keep disappearing from the collection he’s guarding
someone help him he is not having a good time. and it’s only going to get worse...
#ask dump#worldbuilding#cybertronian biology#Empurata#mutacon#sweeps#gatoraider#cybertron#cybertronian culture#autobot#elite guard#decepticon#predacon#rewind#elita infin1te#Alpha Trion#faculty#i've given so much away about alpha now sheesh#well not really i guess but ITS STILL A LOT
26 notes
·
View notes
Text
Laser and Light Treatment of Acquired and Congenital Vascular Lesions
IPL treatment of PWS
IPL devices are broadband filtered xenon flashlamps that work based on the principles of selective photothermolysis. The emission spectrum of 515–1200 nm is adjusted with the use of a series of cut-off filters, and the pulse duration ranges from approximately 0.5 to 100 msec, depending on the technology. The first commercial system, Photoderm VL (Lumenis, Yokneam, Israel), became available in 1994 and has been used to treat vascular anomalies. Another, IPL Technology (Danish Dermatologic Development [DDD], Hoersholm, Denmark), with dual-mode light filtering, has also been used to treat PWS. Many other IPL systems have recently been developed, and the appropriate parameters for congenital vascular lesions are still being established. The IPL has been used successfully to treat PWS (Fig. 39.7) [78–80], but pulsed dye laser remains the treatment of choice.
IPL technology has also been used to treat pulsed dye laser-resistant PWS. In the study by Bjerring and associates, seven of 15 patients achieved over 50% lesional lightening after four IPL treatments. Most of these patients had lesions involving the V2 dermatome (medial cheek and nose), which are relatively more difficult to lighten. Six of the seven showed over 75% clearance of their PWS. A 550–950-nm filter was used with 8–30-msec pulse durations and fluences of 13–22 J/cm2 to achieve tissue purpura. The 530–750-nm filter can also be used with double 2.5-msec pulses, with a 10-msec delay and a fluence of 8–10 J/cm2. Epidermal cooling was not required. Treatment resulted in immediate erythema and edema, and occasional crusting. Hypopigmentation was observed in three patients, hyperpigmentation in one patient, and epidermal atrophy in one patient.
The basics of body fat
Let’s start with the basics. Not all fat is created equal. We have two distinct types of fat in our bodies: subcutaneous fat (the kind that may roll over the waistband of your pants) and visceral fat (the stuff that lines your organs and is associated with diabetes and heart disease).
From here on out, when we refer to fat, we are talking about subcutaneous fat, as this is the type of fat that cryolipolysis targets. A recent study showed that the body’s ability to remove subcutaneous fat decreases with age, which means we are fighting an uphill battle with each birthday we celebrate.
From popsicles to freezing fat
The cryolipolysis machine — whose name literally translates into cold (cryo) fat (lipo) destruction (lysis) — was invented, in part, by observing what can happen when kids eat popsicles. No kidding here. The cofounders of this process were intrigued by something called “cold-induced fat necrosis” that was reported to occur after young children ate popsicles that were inadvertently left resting on the cheek for several minutes. Skin samples taken from pediatric patients like these showed inflammation in the fat, but normal overlying skin. Thus, it appeared that fat may be more sensitive to cold injury than other tissue types.
HOW DOES IT WORK?
Coolplas Fat Freeze Machine uses rounded paddles in one of four sizes to suction your skin and fat “like a vacuum,” says Roostaeian. While you sit in a reclined chair for up to two hours, cooling panels set to work crystallizing your fat cells. “It’s a mild discomfort that people seem to tolerate pretty well," he says. "[You experience] suction and cooling sensations that eventually go numb.” In fact, the procedural setting is so relaxed that patients can bring laptops to do work, enjoy a movie, or simply nap while the machine goes to work.
WHO IS IT FOR?
Above all, emphasizes Roostaeian, CoolSculpting is “for someone who is looking for mild improvements,” explaining that it’s not designed for one-stop-shop major fat removal like liposuction. When clients come to Astarita for a consultation, she considers “their age, skin quality—will it rebound? Will it look good after volume is removed?—and how thick or pinchable their tissue is,” before approving them for treatment, because the suction panels can only treat the tissue they can access. “If someone has thick, firm tissue,” explains Astarita, “I won’t be able to give them a wow result.”
WHAT ARE THE RESULTS?
“It often takes a few treatments to get to your optimum results,” says Roostaeian, who admits that a single treatment will yield very minimal change, sometimes imperceptible to clients. “One of the downsides of [CoolSculpting] is there’s a range for any one person. I’ve seen people look at before and after pictures and not be able to see the results.” All hope is not lost, however, because both experts agree that the more treatments you have, the more results you will see. What will happen eventually is up to a 25 percent fat reduction in a treatment area. “At best you get mild fat reduction—a slightly improved waistline, less bulging of any particular area that’s concerning. I would emphasize the word mild.”
WILL IT MAKE YOU LOSE WEIGHT?
“None of these devices shed pounds,” says Astarita, reminding potential patients that muscle weighs more than fat. When you’re shedding 25 percent of fat in a handful of tissue, it won’t add up to much on the scale, but, she counters, “When [you lose] what’s spilling over the top of your pants or your bra, it counts.” Her clients come to her in search of better proportions at their current weight, and may leave having dropped “one or two sizes in clothing.”
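To make that “won’t add up to much” concrete, here is a rough back-of-envelope calculation; the 500 g figure is purely an illustrative assumption, not a number from the experts quoted above:

```latex
% Illustrative assumption: a "handful" of pinchable fat ~ 500 g
0.25 \times 500\ \mathrm{g} = 125\ \mathrm{g} \approx 0.3\ \mathrm{lb}
```

In other words, even the best-case 25 percent reduction in one treated bulge moves the scale by only a fraction of a pound, which is why the experts frame results in terms of proportions and clothing sizes rather than weight.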
Although the mechanism of cryolipolysis is not completely understood, it is believed that vacuum suction with regulated cooling impedes blood flow and induces crystallisation of the targeted adipose tissue with no permanent effect on the overlying dermis and epidermis. This cold-induced ischaemia may promote cellular injury in adipose tissue via cellular oedema and mitochondrial free radical release. Another theory is that the initial insult of crystallisation and cold ischaemic injury is further perpetuated by ischaemia reperfusion injury, causing generation of reactive oxygen species, elevation of cytosolic calcium levels, and activation of apoptotic pathways.
Whatever the mechanism of injury, adipocytes undergo apoptosis, followed by a pronounced inflammatory response, resulting in their eventual removal from the treatment site within the following weeks. The inflammatory process sees an influx of inflammatory cells at 14 days post treatment, as adipocytes become surrounded by histiocytes, neutrophils, lymphocytes, and other mononuclear cells. At 14-30 days after treatment, macrophages and other phagocytes envelope and digest the lipid cells as part of the body’s natural response to injury. There was initial concern that cholesterol, triglycerides, low density lipoproteins (LDLs), high density lipoproteins (HDLs), bilirubin, and glucose levels would be affected; however, these have all been shown to stay within normal limits following the procedure.
Four weeks later, the inflammation lessens and the adipocyte volume decreases. Two to three months after treatment, the interlobular septa are distinctly thickened and the inflammatory process further subsides. Fat volume in the targeted area is apparently decreased and the septa account for the majority of the tissue volume.
Patients and treatment areas
Although all studies show reduction in every area examined, it is still unknown which areas are most responsive to cryolipolysis. Various factors may play a role in the degree of fat reduction observed after cryolipolysis. The vascularity, local cytoarchitecture, and metabolic activity of the specific fat depots in question may play a role.
There is a lack of substantial research to identify the ideal patient or even the ideal area to be treated. Given a modest (yet significant) improvement of up to a 25% reduction in subcutaneous fat, it is thought that the best candidates are those within their ideal weight range who engage in regular exercise, eat a healthy diet, have noticeable fat bulges on the trunk, are realistic in their expectations, and are willing to maintain the results of cryolipolysis with a healthy, active lifestyle.
Cryolipolysis is safe for all skin types, with no reported pigmentary changes, and is safe for repeated application. Ferraro et al. suggest that patients who require only small or moderate amounts of adipose tissue and cellulite removal would benefit most from cryolipolysis treatment. Contraindications include cold-induced conditions such as cryoglobulinaemia, cold urticaria, and paroxysmal cold haemoglobinuria. Cryolipolysis should not be performed on treatment areas with severe varicose veins, dermatitis, or other cutaneous lesions.
HIFU Decision Guide: Which Device Is for You?
High-Intensity Focused Ultrasound, also known as HIFU, is a non-surgical, non-invasive treatment procedure that tightens and lifts the skin using ultrasound energy. It is considered a safe and effective procedure for tightening the facial skin.
The HIFU treatment procedure has loads of advantages over traditional means of facial improvement: it is painless, with zero incisions, no scarring, and no recovery time. In addition, it is far less expensive too.
Granted, HIFU is a popular means of looking better. Little wonder that there are tons of clinics and spas that offer services they claim can make you look better. However, that's not enough reason for you to jump on the bandwagon of folks searching for ways to look better.
Why Is Choosing the Right HIFU Device Important?
Be sure to get a professional assessment before you start your HIFU treatment. This is because a professional and objective assessment from a specialist will help you discover the best form of treatment using the right multifunctional HIFU that is suitable for you.
Ultrasound technology is completely safe and has been used in medicine for decades. It works by contracting and shortening muscle fibers, thus causing the lifting and tightening effect that makes the skin look better.
Lifting, tightening, and fat melting are natural processes that result in a slimmer face, a reduced jowl line, and facial tightening. Confused about how to go about the HIFU facelift treatment in Singapore?
You will be able to choose the best HIFU device once you decide to undertake the treatment. Let’s look at some of the most common HIFU devices in Singapore.
What is the working principle of the vaginal HIFU machine?
The vaginal HIFU machine uses a noninvasive ultrasonic focusing technique to target the mucous membrane's fibrous layer and muscle layer directly. Using ultrasonic waves as the energy source, and taking advantage of their penetration and focusing properties, the system sends ultrasonic energy focused into the lamina and muscle fiber layer at a predetermined depth. A region of higher ultrasonic intensity, called the focal region, is formed. Within 0.1 second, the temperature of this region can reach above 65°C, so the collagen is reorganized while the normal tissue outside the focal region remains undamaged. The desired depth layer can therefore achieve the intended effect of collagen contraction, reorganization, and regeneration. Ultimately, the desired vaginal tightening effect is achieved.
What are the functions of each cartridge?
1. Vaginal tightening head
The 4.5mm head delivers energy directly to the SMAS, causing thermal coagulation that tightens and lifts it. This improves the muscle structure from deep to shallow, helping the muscle layer restore its elasticity and tighten.
With the 3.0mm head, the ultrasound penetrates the skin to a depth of 3.0mm, activating collagen in the dermal layer. This effectively firms the facial contour, while also shrinking large pores and reducing the appearance of wrinkles.
3D HIFU distributes ultrasound energy across width, length, and depth, making treatment more comprehensive and three-dimensional. It delivers heat energy directly to the skin and subcutaneous tissue, stimulating and renewing the skin's collagen and consequently improving texture and reducing sagging. The high-quality handle with a German-imported motor can be used for life. A total of 8 cartridges covers treatment for the whole body: 1.5mm for the forehead and around the eyes; 3.0mm for the dermis layer (face treatment); 4.5mm for the SMAS layer (face treatment); and 6.0mm/8mm/10mm/13mm/16mm for the body fat layer. Every cartridge is rated for a lifetime of 20,000 lines.
The 4D HIFU machine uses the power of high-intensity focused ultrasound to safely lift and tighten skin. High-intensity focused ultrasound is a form of energy significantly different from light (such as IPL and lasers) or electrical (radio-frequency) energy. HIFU protects the skin surface while precisely penetrating at greater depths and higher temperatures than radio frequency, for example, treating beyond the dermis and foundation layers, where structural weakening starts.
Tissue at the target point is heated to 65°C, creating spaced ‘wounds’ and cellular friction within the skin tissue. This in turn promotes healing, immediately contracts collagen, and stimulates rapid new collagen production. Over the next 90-180 days, the wound-healing response stimulates long-term tissue remodelling and leads to further lifting and tightening, with results that can last years.
4D HIFU also helps improve the tone of all the features of your face, such as your eyes, cheeks, mouth, chin, and skin, making it a viable alternative to Botox with the benefit of maintaining facial expression. It is also excellent after a surgical face lift, to maintain the lift and treat blood stasis, scarring, and numbness.
This mode minimizes treatment time while achieving a great anti-aging result. It is leading a new trend in the market and can help you expand your business.
2 notes
·
View notes
Text
FM Synthesis: Everything You Need to Know to Get Started
FM Synthesis: Everything You Need to Know to Get Started: via LANDR Blog
Synthesizers are some of the most inspiring music machines around.
The fact that you can build complex musical textures from scratch inside a machine will always feel like magic.
When it comes to synthesis, many producers are familiar with the basics. Concepts like oscillators, filters, and envelopes have made their way into many musicians’ daily practice.
Those components all belong to a method called subtractive synthesis. But there’s a whole other, more mysterious world of synthesizers out there.
I’m talking about FM synthesis. Many producers think FM is dated and overly complex. After all, it comes from an era where most synths were bulky black boxes with tiny menus instead of real knobs.
But FM is one of the most interesting types of synthesis out there. If you’re looking for new and fresh synth sounds to bring into your music, there’s almost no better choice than FM.
In this article, I’ll explain what FM synthesis is and how to start using it in your music.
Let’s get started.
What is FM synthesis?
FM synthesis is a method of generating complex timbres by modulating the frequency of one sound with another.
FM was invented by John Chowning at Stanford University in the late 60s, but it didn’t become popular until Yamaha released the DX7 synthesizer based on the technology in 1983.
The DX7 sold thousands of units and went on to define the sound of synths in the 1980s.
What is FM synthesis good for?
FM is a unique style of synthesis that works well for certain sounds.
You probably recognize a lot of the classics from when it was most popular in the 80s, but there’s much more out there.
FM synthesis works great for these types of sounds:
Instruments with complex attack like electric pianos, bells and mallets
Aggressive bass that punches through the mix
Icy atmospheric pads
Plucked strings or distorted sounds
How does FM synthesis work?
FM synthesis uses the principle of frequency modulation to create its unique sounds.
The term frequency modulation sounds a little scary, but you’re probably already using it in its simplest form.
Frequency modulation is just like any other kind of modulation you would create using LFOs in your subtractive synth—with one key difference.
If you need a refresher, here are the basics. A basic oscillator waveform on its own isn’t very interesting.
It needs movement and action to make it interesting. That’s where modulation comes in. Modulation creates change over time.
You use a special kind of oscillator called an LFO to modulate other parameters on your synth. Imagine you had a third hand that could turn a knob up and down at exactly the same rate every time—that’s LFO modulation!
Applying an LFO to oscillator pitch (or frequency) will give you a vibrato sound. That’s the simplest form of FM.
The speed of the vibrato effect is determined by the frequency of the LFO. Its intensity is determined by the LFO amount.
This is essentially the same process that takes place within an FM synthesizer, although as I mentioned above, there’s a difference.
As you increase the frequency of the LFO more and more, it starts to sound less like vibrato and more like a whole other sound altogether.
That’s where things get interesting. At frequencies in the audible range and above, this type of modulation can create timbres never heard in synthesis before the dawn of FM.
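Here’s a minimal sketch of that idea in Python with numpy. The function name and parameter values are illustrative choices, and strictly speaking this implements phase modulation, which is how most “FM” synths (including the DX7) actually work under the hood:

```python
import numpy as np

SR = 44100  # sample rate in Hz

def fm_tone(f_carrier, f_mod, index, dur=1.0):
    """One sine wave modulating another's phase; `index` sets how
    strong the modulation is (the 'LFO amount' in the vibrato analogy)."""
    t = np.arange(int(SR * dur)) / SR
    modulator = np.sin(2 * np.pi * f_mod * t)
    return np.sin(2 * np.pi * f_carrier * t + index * modulator)

vibrato = fm_tone(440, 5, 0.5)       # 5 Hz modulator: ordinary vibrato
new_timbre = fm_tone(440, 440, 2.0)  # audio-rate modulator: a new timbre
```

Same math in both calls; only the modulator frequency changes, which is exactly the jump from vibrato to timbre described above.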
I’ll show you how.
Operators
Modern day FM can all be traced back to the DX7. Even today, FM synths are based on the same basic components—operators, algorithms, carriers and modulators.
I’ll go through each one and explain how they work.
Operators are the basic building blocks inside an FM synth.
Think of each operator like a tiny little synthesizer with its own oscillator and envelope generator. Except each of these mini synths can only use a basic sine wave oscillator shape.
The classic FM synths had a total of six operators—that’s a lot of synthesis power in one unit.
Hot tip: Some of the newest FM synths on the market have only four operators. You might think this is less flexible, but modern day FM operators can often start with waveforms other than sine. As I’ll explain later, that simplifies the process of building up complex sounds and makes using FM even easier to get started with!
Carriers and Modulators
To create frequency modulation you need at least two sound sources. One to do the modulating and one to be modulated.
In an FM synth, both of these roles are played by operators.
The operator that does the modulating (think of it like the LFO in the previous example) is called the modulator.
The operator that gets modulated is called the carrier.
The magic of FM comes from changing the operator frequencies, envelopes and arrangement within a patch.
Algorithms
Speaking of operator arrangement, the way each carrier or modulator is connected to the other operators makes a big difference to the sound.
Which operator modulates which? How many operators are in use? Do they each modulate more than one operator?
With six of them in the mix, arranging operators individually would get overwhelming.
That’s why Yamaha decided to implement a set number of fixed operator arrangements for the most useful configurations.
These preset routings are called algorithms.
They’re the funny block diagrams depicted on the front panel of the DX7 if you’ve ever seen one in person.
The algorithms may look complex, but here’s all you need to know to understand them. The bottom row of operators in any given algorithm are the carriers. The rest are modulators.
The carriers play a similar role to the oscillators in a traditional subtractive synth.
For example, let’s take a look at algorithm five on this chart of four operator algorithms.
In this example each of the two carriers has one operator modulating them.
Setting these modulators to different frequencies will produce two distinct timbres.
Changing the level of each carrier will adjust the blend of timbres the same way mixing two oscillator waveshapes works in a subtractive synth.
Envelope generators
Of course, if the sounds you made with FM had no change in amplitude over time, they would get pretty boring.
Subtractive synths have envelope generators for the VCA and often the filter too.
That’s how you can turn a sound from a flowing sustained pad into a sharp percussive pluck.
EGs work the same way in FM, there’s just more of them—one per operator to be precise!
For carriers, the envelope generator works just like you’re used to in subtractive synthesis.
In the four operator example from above, setting both carriers for a slow attack and long release will create a pad sound—simple.
But modulators can have envelopes too. Let’s go back to basic LFO vibrato to understand how that works.
The LFO frequency becomes the vibrato rate, but the LFO amplitude becomes the vibrato depth.
Remember that the envelope generator makes changes to the amplitude of a sound over time.
If you added a pad type envelope to this LFO, the vibrato rate would remain constant, but its intensity would gradually build over time and then slowly diminish when you release the key. More on that later.
Ratios
One of the strangest things about FM is how unpredictable it seems. Randomly choosing modulator frequencies creates some pretty chaotic sounds.
But you’ll rarely use this kind of fixed modulator frequency for programming most types of FM sounds.
Instead, the modulator frequency will most often be a multiple of the carrier. This makes it much easier to anticipate what type of sound you’ll create.
Here’s the rule: The higher the ratio, the more complex harmonics will be generated.
For example, a 1:1 ratio for the frequency of the modulator to the carrier will produce a sound with only slightly more harmonics than a sine wave—in fact it will sound pretty similar to a saw wave if the modulator level is high enough!
A 2:1 ratio will produce something close to a square wave. As the ratio gets higher, more and more complex tones will come out.
Ratios of 18:1 or higher are necessary to create some of the classic bell and tine timbres FM is known for.
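As a rough sketch of how ratios behave in practice, reusing the hypothetical fm_tone() helper from the earlier snippet (the exact index and frequencies are arbitrary):

```python
# Integer modulator:carrier ratios keep the extra partials harmonic;
# the higher the ratio, the further up the spectrum the energy lands.
for ratio in (1, 2, 18):
    tone = fm_tone(f_carrier=220, f_mod=220 * ratio, index=2.0)
    # 1:1  -> saw-ish, 2:1 -> hollow/square-ish,
    # 18:1 -> sparse high partials: the classic bell/tine character
```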
Feedback
The last thing I’ll cover here is operator feedback. It’s not essential to the basics, but most FM synths include it, so it’s worth spending some time to explain.
The exact definition is a bit complicated, but all you need to know is that feedback means the operator’s output becomes its own modulator signal.
All that means is feedback is just another way to create harmonically rich waveforms from the basic sine waves available in FM.
As you turn up the feedback the operator’s output will approach a saw wave.
Feedback is indicated with a looping signal path around the operator. In traditional 6 operator FM there’s normally just one path with feedback per algorithm.
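One common way to implement operator feedback, shown here as a hedged sketch with names of my own choosing (not how any particular synth does it internally), is one-sample feedback: each output sample gets fed back into the next sample’s phase.

```python
def feedback_operator(freq, beta, dur=1.0):
    """Single operator modulating itself. As `beta` rises toward ~1.5
    the sine leans toward a saw shape; push further and it turns noisy."""
    n = int(SR * dur)
    phase = 2 * np.pi * freq * np.arange(n) / SR
    out = np.zeros(n)
    for i in range(1, n):  # nudge each sample's phase by the last output
        out[i] = np.sin(phase[i] + beta * out[i - 1])
    return out
```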
FM synthesis tutorial: How to program a bell sound
If all this is sounding a bit academic, don’t worry. There’s plenty of great practical and musical applications of FM.
To help you make your first patch in FM, I’m going to go through how to create one of the most recognizable FM sounds using a great free VST synth called Dexed.
In this tutorial I’ll be creating a bell sound.
Let’s get started:
1. Initialize a patch
Click the init patch button near the preset browser on Dexed to start with a blank patch.
Now if you play the keyboard, you should hear a basic sine wave as you play the keys.
https://blog.landr.com/wp-content/uploads/2020/03/step1.mp4
2. Choose an algorithm
The next step is to select an algorithm to base your patch around. That way you’ll start with a clear idea of which operators are routed where and how they affect your sound.
This example requires only four operators, so you’ll only be paying attention to the bottom two rows of the algorithm.
Since it’s not a complex patch, let’s stick with the default algorithm.
3. Add a modulator
Right now your patch only has a single carrier with no modulator affecting it.
Mouse over to operator 2, which is directly above operator 1 in the algorithm.
As you turn up its level, you’ll start to hear your sine wave becoming more saw-like.
https://blog.landr.com/wp-content/uploads/2020/03/step3.mp4
4. Change the ratio
The default ratio in this FM synth is 1:1. Let’s turn it up dramatically to hear the effect of increasing the ratio.
With the ratio set to 18:1, you can hear that all kinds of harmonics are now present. Some are in tune, but others are highly dissonant. Turn down the level of this operator slightly to smooth out the sound.
This sound might not seem very pleasing right now, but the noisiness and unruly harmonics are key to simulating the complex attack of a bell sound.
https://blog.landr.com/wp-content/uploads/2020/03/step4.mp4
5. Adjust the envelope
Reduce the sustain and decay time of the carrier’s envelope until it’s only a short pluck. Your goal is to use this complex attack for the onset of the bell sound only.
Dexed’s envelopes use the same control scheme as the original DX7. They might take some getting used to, but you can use the graphical display to help you.
https://blog.landr.com/wp-content/uploads/2020/03/step5.mp4
6. Add another carrier
With this basic algorithm, adding another carrier is as simple as turning up the level of the other operator on the bottom row.
In this case, it’s operator 3. Turn up its level so that you begin to hear its sine wave tone alongside the short attack from the other carrier.
Now the initial attack provides interest during the onset and the sine wave provides the body.
https://blog.landr.com/wp-content/uploads/2020/03/step6.mp4
7. Adjust the other envelope
The only step left to complete this bell tone is to give the sine wave carrier a more bell-like envelope.
Increase the release time on the envelope of operator 3 so that it fades out gently and gradually.
https://blog.landr.com/wp-content/uploads/2020/03/step7.mp4
FM adventure
FM is one of the youngest methods for synthesizing sound from scratch. That means it’s still full of unexplored sound design potential.
But whether you’re looking for brave new sonic territory, or you just love the iconic sounds of the 80s, there’s plenty to explore with FM if you’re willing to learn.
And once you dive in, you’ll find it’s not that scary after all.
Now that you have the basics of FM down, get back to your DAW and start building your first patch with FM.
The post FM Synthesis: Everything You Need to Know to Get Started appeared first on LANDR Blog.
2 notes
·
View notes
Text
Submitted via Google Form: Is it possible for a (kinda) isolated civilization to got a industrial revolution? This civilization spread around a whole continent with various resources: grain, wood, minerals, etc. Has different ethnic groups (principally separated by zone: north, center and South). The government it’s like a federal monarchy (I don’t know if that’s really a thing or not), that means that exist a central power but there are principles (is this the English word for the government of a prince? If not I apologise, English it’s not my first language) or provinces (specifically on the North and the South of the continent), that rule and regular their own lands, but answer to the king.
Also, this civilization has a long history, but had not a bronze age crisis (or a similar thing to). I don’t know if that can affect the technological development or not. Although the civilization has its dark ages that after motivated cultural, social and technological changes. They have a cultural capital in the South, where art and science are cultivated. Even there are sapient people of other continents. Reading and writing are beginning to spread around all social classes, but maintains the majority on nobility and sacerdotes.
Also, a civil war would stop or accelerate the industrial process or not?
I apologise for the large of my questions. But I tried to give you the more information I could.
Tex: It is indeed possible for an isolated civilization to have an industrial revolution, though they might follow a different path to it. I will get to explaining some particular nuances of that sort of “revolution” in a bit, but I would like to address some other things in your question first.
Federal monarchies are a thing, albeit rare, and usually constrained to theory because of the difficulty in real-world execution of the idea. Lands governed by a prince are called principalities in English, though sometimes also princedom (following the same -dom rules as kingdom).
The Late Bronze Age collapse - if that is what you’re referring to? - is a dark age unto itself, and rather the opposite in terms of societal effect to a technological revolution. The causes and timing of a dark age will determine the breadth and depth of its impact on your society; in general, the more momentum a society has in terms of culture, science, and international relations, the more impactful and difficult it is to recuperate from a dark age.
“Reading and writing are beginning to spread around all social classes, but maintains the majority on nobility and sacerdotes.” What is the history behind this? Has the literacy rate risen gradually and steadily, or has it been irregular, marked by various events that would preclude a regularly increasing rate of literacy? If literacy has been restricted to the noble and religious castes, why so? Has your society had a Carolingian Renaissance before its industrial revolution?
The interesting thing about industrial revolutions is that they are built upon previous technological and scientific leaps - Wikipedia’s timeline of historic inventions parses them by era, and hopefully is available in your native language.
One of the main points of the eponymous Industrial Revolution is the shift from hand production to mechanical production. Electricity factored into this only at a later step, so it was not truly necessary to kick-start the first era. There are in fact many industrial revolutions (we are currently in our fourth), and I would like to note that they often coordinate with renaissances, staggered as they are around the world.
A federal monarchy, in this instance, would function as a sort of microcosm for global patterns of dark age → renaissance → technological revolution. You mention that the South is the cultural capital of your society, and seems to house much in terms of institutional knowledge. This area, then, would be the nexus point of your society’s economy. Regardless of any isolationism, centralizing the innovative parts of one’s society makes it especially vulnerable to collapse in the case of, say, a civil war.
The Athenian coup of 411 BC during the end of the Peloponnesian War is a good example of this. While the Athenians briefly enjoyed a restoration of their government (Wikipedia), it was dissolved a short while later (Wikipedia). Athens remained a cultural center, possibly by habit, though it had lost what remnants of bureaucratic authority and autonomy it once had.
If all knowledge of technological innovation is cloistered within one area, and that area is besieged by a war - civil or not - then it will definitely set your society back by however many ages of knowledge lost. Fortunately, many real-world societies were sensible enough to create and distribute copies for the sake of preserving cultural heritage, functioning also as an excellent PR move to laud the benefits of their society for the sake of both conquest and trade.
Provided that neither city (like Roanoke Colony and the Land of Punt) nor continent become lost, some knowledge will always be retained and utilized if necessary. Technological revolutions impart the advantage of freeing up human capital from menial labor that would otherwise take up a person’s time and ability to contribute to society in a meaningful way, which is economically beneficial.
I am not very certain how isolationism would factor into this, but it would seem that maintaining some degree of separation in multi- and international politics would be advantageous in advancing one’s society in the pursuit of knowledge. However, should others find rumors of this advancement, your society may eventually be approached by potential allies or enemies.
If there is a civil war occurring while your society is approached by outside influences, please be aware that these others may take advantage of this instability, especially if there are natural resources compounding the value of your society’s human capital. There is also the issue of relative stratification of different ethnic groups, since you had mentioned it - if the schism is along ethnic lines, then there’s potential for others to ally themselves along those lines, exacerbating the conflict. Either way, there is likely to be moral grounds upon which the conflict is centered, and interference from outside groups has the potential to heighten and polarize tensions.
To quote the play Agamemnon by Aeschylus: “In war, the first casualty is truth.” No war is truly without a morality, which may quickly become propaganda, in order to sway the participating peoples. The importance you ostensibly give the South as a center of culture in the same question of dark ages and civil wars and technological revolutions lends to me the idea that something happens with it in your story. Are its relations with its neighbors healthy, or strained? Why so? Is it - or someone else - pushing for advancement at the expense of cultural mores?
J.R.R. Tolkien’s novels are often accredited the message of anti-industrialism (Wikipedia, Medium, Tolkien Gateway - Oxford section) and the perpetuation of the Merry England ideology. This is something your society might see during cultural upheaval, particularly if it’s caused by innovation that at first seems to run counter to their culture. The arts might take on a facet of this conflict, particularly in literature under the form of social and proletarian literature. Le roman à thèse is a particular subgenre of this literature, one which focuses on persuading the reader to agree with the author’s argument. Factory girl literature maintains similar themes, critiquing industrialization primarily from the perspective of Asian authors utilizing the factory girl archetype.
In terms of literature recommendations, I have two more to offer you: A Dream of Arcadia: Anti-Industrialism in Spanish Literature, 1895–1905 by Lily Litvak (ISBN: 978-0292741300) and The Lives of Machines: The Industrial Imaginary in Victorian Literature and Culture by Tamara Ketabgian. Both detail the reactions of a culture during and after an industrial revolution, and what type of counter-culture arises from it.
I think it is possible for an isolated society to have an industrial revolution, but it would likely be on a smaller scale than a non-isolated society, and their efforts might more be on rehabilitation of their environment and encouraging a more profound and beneficial relationship with nature, rather than chasing the next generation of shiny objects and possible advancements in warfare.
A federal monarchy may present some unique issues with this setting, because of its comparatively rigid bureaucratic hierarchy and propensity to shatter along federation lines - which is where I think conflict is most likely to lie in this situation - and your society may not regroup under the same parameters. Most likely, all of this would follow a previous industrial revolution, à la the Progressive Era of the US/La Belle Époque of Europe.
Your worldbuilding seems to indicate a circumvention of New Imperialism and jingoistic foreign policy, so unless somebody comes knocking at your society’s door, this industrial revolution and civil war you have mentioned might be very toned down in terms of potential impact. A parallel of the Long Depression is still feasible, and might be a major contributing factor in your society’s civil war.
30 notes
·
View notes
Text
Lupine Publishers | Acute Viral Myocarditis: The Role of Speckle Tracking Echocardiography
Lupine Publishers | Journal of Cardiology Research
Abstract
Although an accurate initial diagnosis of acute myocarditis can be challenging to make, all patients with suspected acute myocarditis need a thorough diagnostic work-up and should undergo a full imaging evaluation from the very outset to rule out other conditions with a similar clinical picture. Acute myocarditis and acute myocardial infarction may masquerade as each other; as a result, a thorough evaluation is warranted to be certain of the diagnosis, and these patients should be managed in a high-level cardiac care setting. The diagnosis of acute myocarditis is based on symptoms, elevated markers of myocardial necrosis, and electrocardiographic and imaging findings, including echocardiographic changes. A broad range of diagnostic tools is warranted during the initial diagnostic work-up; however, a full echocardiographic evaluation should form an integral imaging modality to rule out other cardiac and systemic disorders, to properly evaluate ventricular function and the presence of pericardial collections, and to further guide management.
Keywords: Acute viral myocarditis; Speckle tracking echocardiography
Abbreviations: AVM: Acute Viral Myocarditis; LV: Left Ventricle; RV: Right Ventricle; STE: Speckle Tracking Echocardiography
Introduction
Although an accurate diagnosis of acute myocarditis can be challenging to make, all patients with suspected acute myocarditis should have a thorough diagnostic work-up and undergo a full imaging evaluation to rule out other causes with a similar clinical picture. Acute myocarditis and acute myocardial infarction may masquerade as each other; as a result, a thorough evaluation is warranted to be certain of the diagnosis, and these patients should be managed in a high-level cardiac care setting [1-5]. The diagnosis of acute myocarditis is based on symptoms, elevated cardiac biomarkers suggesting myocardial necrosis, and electrocardiographic and echocardiographic changes [6]. In addition to the broad range of diagnostic tools warranted during the initial diagnostic work-up, a full echocardiographic evaluation should form an integral imaging modality to rule out other masquerading cardiac and systemic disorders, to properly evaluate ventricular function and the presence of pericardial collections, and to further guide management. Although standard echocardiographic modalities form an integral part of routine daily practice, speckle tracking echocardiography is a new, advanced echocardiographic imaging modality useful for detection of subclinical myocardial and ventricular dysfunction [7]. The main aim of this paper is to review and highlight the role of speckle tracking echocardiographic modalities in facilitating a thorough evaluation of any patient suspected of acute myocarditis.
Speckle tracking echocardiography
Speckle tracking echocardiography (STE) is a new echocardiographic technology with high sensitivity and reproducibility for detection of subclinical ventricular dysfunction [7]. Advanced STE strain and strain rate indices, which are new echocardiographic parameters of the modern era, are useful for evaluating intrinsic cardiac deformation and should be implemented in daily clinical practice. The STE indices provide an accurate assessment of regional and global ventricular contractility, enhanced by angle independency and fewer pitfalls with through-plane motion compared with conventional 2D-echocardiographic parameters [8]. Based on recent reports, STE should be recommended in daily clinical practice when evaluating any cardiac condition, including inflammatory cardiomyopathies [8].
Basic principle of speckle tracking echocardiography
The STE is a rapidly growing technique and has become an important component of routine clinical practice in recent years. Reports have demonstrated the superiority of STE over tissue Doppler imaging across various aspects of deformation imaging [8,9]. Moreover, STE is an easy method to apply that provides more objective data on myocardial mechanics and reflects regional and global ventricular function in a superior way in terms of diagnosis and prognosis. Throughout the cardiac cycle, the left ventricle (LV) performs longitudinal shortening-lengthening movements around the long axis, thickening and thinning movements around the transverse axis, as well as thickening-thinning (radial axis) and lengthening-shortening (circumferential axis) movements in the short axis [10,11]. As the heart contracts, its myocardial fibers shorten longitudinally and thicken transversely. The right ventricle (RV) performs longitudinal shortening-lengthening movements around the long axis, which are the important and commonly adopted deformation parameters in current clinical practice. The frequently used ventricular deformation parameters are the strain, which represents the shortening or thickening relative to the baseline length, and the strain rate, which describes how quickly this deformation occurs over time [10,11]. Both deformation parameters are used to evaluate both ventricles and can also be applied to both atria. However, the focus of this review is on ventricular myocardial mechanics, particularly the LV.
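As a compact reference, the two deformation parameters described above can be written as follows (a standard formulation from the strain-imaging literature, with L_0 taken as the baseline end-diastolic length; not specific to any one vendor's implementation):

```latex
% Strain: deformation relative to the baseline (end-diastolic) length L_0
\varepsilon(t) = \frac{L(t) - L_0}{L_0}
% Strain rate: how fast that deformation occurs over time
\mathrm{SR}(t) = \frac{d\varepsilon(t)}{dt}
```

Negative longitudinal strain thus corresponds to systolic shortening, and positive radial strain to systolic thickening, which is why the two are reported with opposite signs.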
Reported data using speckle tracking echocardiography in myocarditis
Luis et al. [12] reported the case of a seventeen-year-old male with acute viral myocarditis, in whom STE evaluation demonstrated significantly impaired LV longitudinal, circumferential, and radial strain values [12]. In addition, the authors demonstrated significant attenuation in the inferior, inferolateral, and apical segments, with the inferolateral segment demonstrating a paradoxical circumferential strain [12].
In a recent larger study of twenty-eight consecutive patients with CMR-verified acute myocarditis based on the Lake Louise criteria, Løgstrup et al. [13] demonstrated that STE was a useful echocardiographic modality during the initial diagnostic work-up, as global longitudinal strain added supportive information to clinical and conventional echocardiographic findings [13,14]. Furthermore, the authors highlighted that global longitudinal systolic myocardial strain (including both epicardial and endocardial strain) correlated strongly with the degree of myocardial edema, and STE was found useful for diagnosis and for evaluating the degree of myocardial dysfunction [13,14]. Current technological developments in cardiology and cardiovascular imaging have brought new dimensions to diagnosing acute myocarditis and its sequelae. Importantly, in the recent past, a few reports have also recommended that STE, characterized by precise evaluation of regional contractility, should be used as an adjunctive, confirmatory tool in the diagnosis of acute myocarditis and inflammatory cardiomyopathy [15,16].
Larger studies
Retrospectively, Hsiao et al. [15] studied a total of 128 subjects [45 patients with suspected acute myocarditis and 83 healthy controls (mean age 39 years in both groups)], all of whom underwent 2-dimensional STE. The study evaluated circumferential and longitudinal strain and strain rate as the main outcome parameters, both as prognostic and diagnostic markers. The results demonstrated lower circumferential strain, circumferential strain rate, longitudinal strain, and longitudinal strain rate in patients with acute myocarditis; both LV strain and strain rate were good diagnostic and prognostic tools, and these discriminatory features were also demonstrated in those with normal LV ejection fraction [15]. Furthermore, STE was also useful in predicting deterioration and overall event-free survival. In agreement with earlier case reports, Hsiao et al. [15] also demonstrated that global longitudinal systolic myocardial strain correlated strongly with the degree of myocardial edema. Although there are convincing reports regarding the role of STE in the LV, data on RV strain parameters, including their prognostic value, are still limited.
Conclusion
Acute myocarditis might masquerade as other cardiac conditions, including acute myocardial infarction; as a result, an extensive evaluation is warranted at first hand to make an accurate diagnosis and guide the patient's management accordingly. Echocardiography, particularly STE, is an important and advanced modality for evaluating subclinical myocardial mechanical dysfunction. The STE is a useful echocardiographic tool in acute myocarditis, as the condition is associated with impaired LV and RV strain parameters. STE parameters are useful in patients with acute myocarditis, with LV strain and strain rate serving both diagnostic and prognostic purposes, even in patients with a normal baseline LV ejection fraction on standard transthoracic echocardiography. Furthermore, STE is useful in predicting ventricular deterioration and event-free survival in patients with acute myocarditis. Although there is a growing number of case studies, series, and even larger datasets on LV function using speckle tracking strain echocardiographic parameters, data on the importance of RV function are still warranted.
Text
Different Industrial Revolutions.
Right now, we are going through the fourth industrial revolution, or Industry 4.0. The earlier eras of technology did not have the same shape and size as today's, but for their time, each was indeed something for people to marvel at. Although the industrial revolution is often treated as a single ongoing event that started in the late 18th century, it is better understood as four industrial revolutions, each built upon the innovations of the prior one and leading to more advanced forms of manufacturing.
The first Industrial Revolution: The first industrial revolution started in the late 18th and early 19th centuries, when the biggest changes came to industry in the form of mechanization: for the first time in history, animal and even human labor could be substituted by mechanical labor. Agriculture started to be replaced by industry as the backbone of the societal economy.
The second Industrial Revolution: Almost a century later, the second industrial revolution started in the late 19th and early 20th centuries. It relied on the application of the principle of mass production along assembly lines, which scaled up manufacturing output through higher coordination between labor, tasks, processes, and machines. To this day, the Second Industrial Revolution is considered the most important one, as the inventions of the automobile and the aircraft came into their own in the 20th century.
The third Industrial Revolution: The third industrial revolution began in the second half of the 20th century, with the emergence of yet another untapped source of energy: nuclear energy. The third industrial revolution also brought forth the rise of electronics, telecommunications, and computers. It opened the doors to research, space expeditions, and biotechnology through new technologies using machines able to repeat a series of tasks under relatively well-defined parameters with minimal supervision. It also relied on the development of information technologies, initially with the digital computer and later with networked information technologies by the end of the 20th century, which led to the formation of the internet.
The fourth Industrial Revolution: The fourth industrial revolution is now taking shape, and worldwide economies are reorganizing around it: manufacturing processes, the scale and scope of output, and the customization of products are all changing. Machines are approaching the flexibility of human labor and can take on more than simple, repetitive tasks. The prime focus therefore shifts to global value chains: a circular process of gathering resources, transforming them into parts and products, distributing the finished goods to markets, and finally making these resources available again through recycling and reuse strategies.
Solulever, a Dutch technology startup, builds on the principles of Industry 4.0 and delivers industrial connectivity platforms that help manufacturers take up the digital transformation of their plants. Its Brabo Edge Platform allows seamless connectivity to different tools and equipment on the shop floor and performs data mashups that are available to development teams in real time.
Text
AN INTRO TO GENERATIVE ART AS NFT
In the rapidly growing and changing NFT ecosystem, numerous trends and developments are taking place simultaneously. One of the major growing trends in the NFT sphere is generative art as NFTs. Those not involved in the minting and trading of NFTs might be befuddled by the whole generative aspect of these tokens.
Before we get into how the world of generative art and NFT technology collides, let us first understand….
What is generative art?
Generative art is a fascinating concept, as it merges the spontaneous and impulsive beauty of creating art with the unwavering stability and consistency of automation, computer-generated or otherwise. The term itself, “Generative Art”, is descriptive enough: it defines all artworks produced using a generative system, an autonomous system that, once set off, results in the creation of art.
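As a toy illustration of what such an autonomous system can look like, here is a minimal, entirely hypothetical Python sketch (not tied to any real artwork or project) that produces a finished pattern from nothing but a seed, with no further intervention from the artist:

```python
import random

def generate_piece(seed: int, width: int = 16, height: int = 8) -> str:
    """A toy autonomous system: given only a seed, it produces a
    finished ASCII pattern with no further artist intervention."""
    rng = random.Random(seed)  # deterministic: the same seed always yields the same piece
    glyphs = " .:+*#"
    rows = ("".join(rng.choice(glyphs) for _ in range(width)) for _ in range(height))
    return "\n".join(rows)

# Each seed yields a distinct, reproducible piece of "art".
print(generate_piece(seed=42))
```

The artist's role shifts from drawing each mark to designing the rules; the system then generates the piece on its own.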
Now, by using words like “automated” or “system”, one might believe that generative art is some sort of recent invention, but surprisingly, it is an art movement whose origins can be traced back some 70,000 years, to stone carvings created using a grid system. The movement entered the cultural zeitgeist around the same time the modern commercial computer emerged in the 1960s, with generative artists experimenting with computer programs to find a meeting point between art and computer science. It hit its stride in the 1990s, as programs such as Design by Numbers were created specifically to code visual arts and computer-based art became more popular. With the breakthrough of artificial intelligence, the Generative Art Movement hit a major evolutionary milestone: AI-produced art became a possibility within the “computer art” medium and soon became immensely popular, not only on the Internet but in the global art market as well.
There are numerous examples of generative artworks that have found mainstream popularity, particularly works from Georg Nees, Vera Molnar and Jared Tarbell, all exemplifying the primary principle of the movement which is to “use technology to create art that could have only existed in our imagination.”
As NFTs continue to push and expand the boundaries of digital art and the overall global art market, we also have to talk about…
Generative NFTs
With ever-evolving AI technology, the creation of generative art has become a sophisticated process that is widely accessible to the masses. Today, it is quite common practice to produce digital generative art with the help of computer-run algorithms; therefore, this art movement's merger with the NFT universe should not come as a surprise.
With the advent of blockchain technology, the creation of generative art is now also possible through smart contracts.
How does this work?
To put it simply, a smart contract is code stored at a certain address on a blockchain, mainly Ethereum. When someone transfers ETH (Ethereum's cryptocurrency) to that address, the smart contract is activated and the code within it is automatically executed. This mechanism has been adapted to the creation of generative art: whenever crypto is sent to run the smart contract, an artwork is created and stored on-chain as an NFT.
With generative art NFTs, there is an added level of uniqueness and novelty that was not possible in the art market before. The process of creating such NFTs has been further refined by including various input parameters, such as the wallet address, gas price, or transaction ID, which ensures that even if the pieces produced by the generative process look similar, each piece remains unique. Additionally, by implementing supply caps, the rarity and value of generative NFT art are further protected: the algorithms are prevented from producing countless similar artworks within the generative series, which could otherwise result in its devaluation.
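To make this concrete, here is a minimal Python sketch of how such input parameters might be hashed into unique art traits (the function name, trait names, and mappings are illustrative assumptions only; real projects implement equivalent logic inside their smart contracts or rendering code):

```python
import hashlib

def derive_art_params(wallet_address: str, tx_id: str, gas_price_wei: int) -> dict:
    """Deterministically derive visual traits from on-chain mint inputs."""
    # Hash the on-chain inputs into a single reproducible seed.
    seed_material = f"{wallet_address}:{tx_id}:{gas_price_wei}".encode()
    seed = int.from_bytes(hashlib.sha256(seed_material).digest(), "big")

    # Map slices of the seed onto visual traits: pieces in a series can
    # share a style yet never collide, because every mint's seed differs.
    return {
        "palette_index": seed % 16,                    # one of 16 color palettes
        "shape_count": 3 + (seed >> 8) % 30,           # between 3 and 32 shapes
        "symmetry": ("none", "mirror", "radial")[(seed >> 16) % 3],
        "noise_scale": ((seed >> 24) % 1000) / 1000,   # 0.000 to 0.999
    }

# Hypothetical example inputs: two different mints yield different traits.
print(derive_art_params("0xA1b2...", "0x9f2e...", 32_000_000_000))
```

Because the hash is deterministic, anyone can re-derive the same traits from the mint transaction, while the supply cap bounds how many such pieces can ever exist.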
Currently, there are numerous successful examples of generative art NFTs such as:
Art Blocks, which is possibly one of the most successful NFT projects, with a sales volume of more than $990 million. The concept behind the whole operation is exhilaratingly simple: based on the Ethereum blockchain, the platform provides on-demand, unique generative art; the user picks an artistic style, pays for the work, and then mints the NFT without knowing in advance what the artwork will look like.
Autoglyphs, which is a pioneer in generative art NFTs, as it was the first of its kind. Started by Larva Labs, the creators of CryptoPunks, this project is also on the Ethereum blockchain and has had a total sales volume of over $41 million. With the supply limited to 512 NFTs, the artworks in this generative series have already been created and sold, and pieces are now only available for purchase on the secondary market.
I hope that my very own soon-to-be-launched generative NFT art series, Ethernal Gates, will be added to this list in the near future. This project is a collaboration with the brilliant people of Arts DAO and is an absolute labour of love. I will be talking about Ethernal Gates in greater detail in the blog next week, so hold on tight!
On that note, you have just completed a quick crash course on generative art and its recent marriage with NFT technology. There is still so much being developed in this space and it feels like we have just begun to scratch the surface. So, keep checking in and I will keep you posted from the frontlines of the exciting world of NFTs…
Shop my artwork collection HERE.
Check my digital artwork collection HERE.
Text
HW8
ANS TO QUESTION 1(a)
Example: Airplane
We want to improve the parameter “speed”.
The conflicting parameter will be “loss of energy”.
The contradiction matrix suggests four candidate principles; three of them are applied below.
Principle 14: spheroidality-curvature
We can round the edges of wingtips!
Principle 20: continuity of useful action.
All engines should work at the same rpm.
Principle 35: parameter changes.
If we can change the degree of flexibility of the wings, I think that should also solve the problem.
Example: Mobile phone
We want to improve the parameter “productivity”.
The conflicting parameter is “ease of operation”.
Principle 1: segmentation.
The phone can be divided into hundreds of parts. This links the improving and the conflicting features.
Principle 28: mechanics substitution.
Replace the mechanical system with a sensory one. I think automation is the key here.
Principle 7: Nested Doll
Place one object inside another.
I think this means all the production machinery should be aligned so that the production process can be much faster.
Principle 10: Preliminary action.
Perform the required change of an object in advance.
All machines have their own functions; they should be placed in such a way that no further action is required after the production line goes into action.
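As a closing sketch for this answer, the contradiction-matrix lookup used above can be automated in a few lines of Python (only the two parameter pairs from this homework are hard-coded here as assumptions; a real TRIZ matrix covers 39 × 39 parameter pairs, so treat this purely as a sketch):

```python
# Names of the inventive principles used in this answer.
PRINCIPLES = {
    1: "Segmentation",
    7: "Nested doll",
    10: "Preliminary action",
    14: "Spheroidality / curvature",
    20: "Continuity of useful action",
    28: "Mechanics substitution",
    35: "Parameter changes",
}

# Mini contradiction matrix: (improving, worsening) -> suggested principles.
MATRIX = {
    ("speed", "loss of energy"): [14, 20, 35],
    ("productivity", "ease of operation"): [1, 28, 7, 10],
}

def suggest_principles(improve: str, worsen: str) -> list:
    """Return the named inventive principles for a parameter conflict."""
    return [PRINCIPLES[p] for p in MATRIX.get((improve, worsen), [])]

print(suggest_principles("speed", "loss of energy"))
# -> ['Spheroidality / curvature', 'Continuity of useful action', 'Parameter changes']
```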
ANS TO QUESTION 1(b):
Drones: the very first thing that comes to mind when we talk about drones is their power to destroy human lives and property in war. We already have examples of this destruction, such as the Nagorno-Karabakh war between Armenia and Azerbaijan and the Russia-Ukraine conflict, where both parties have used drones intensively.
But drones can also be used for a completely different purpose: the betterment of humankind. For example, drones can be used in agriculture. Farmers can use agricultural drones fitted with cameras to improve the treatment of their crops. Drones give farmers a unique perspective that satellite imagery previously could not provide. They help expose issues with irrigation, soil variation, and distressed plants at a much lower cost than methods like crop imaging with a manned aircraft. The success of these drones is made possible by technological advances in GPS modules, digital radios, and small MEMS sensors. Together, these advances allow farmers to bring greater precision to their craft and reap greater rewards.
ANS TO QUESTION 2:
A brief history of Automation
Prehistory
The first automation tool used by Stone Age people was the weighted fishing net, examples of which were recently discovered in South Korea. Like all automation to come, this first tool was all about increasing productivity; it is also the earliest known example of fishing technology.
Fig: stone sinkers
Ancient History
Water and shadow clocks were used to automate timekeeping as early as 1500 BC; the oldest known example, dated to 1500 BC, is from the tomb of the Egyptian pharaoh Amenhotep. There were two types of water clocks: outflow and inflow. In an outflow clock, a viewer tells time by measuring the change in the water level of a container as it drains; an inflow water clock follows the same concept, except that the container gradually fills.
The Early Common Era
The Book of Ingenious Devices, published in 850 by the Banu Musa brothers in Iraq, describes numerous automated technologies developed early in the Common Era, including automatic fountains, an automatic flute player, and various regulators and valves that were important precursors to the control and fail-safe systems we use today.
The Renaissance
The Renaissance is famous for its advances in art and philosophy, but it was also a time of exciting scientific and technological development. In 1439, the German craftsman and inventor Johannes Gutenberg invented the movable-type printing press, with which books could be printed in mass quantities. In 1642, Blaise Pascal invented a digital arithmetic machine; modern calculators are its direct descendants.
The First Industrial Revolution
In the late 1700s, the Industrial Revolution reached new heights on a wave of technological advancement. James Watt's rotary-motion steam engine was finished in 1776. Edmund Cartwright's power loom, designed in 1784 and patented in 1785, used water power to speed up the weaving process. In 1786, the Scottish inventor and mechanical engineer Andrew Meikle invented the threshing machine to remove the outer husks from grains of wheat; before this invention, threshing was a very laborious task that required a large number of workers.
The Second Industrial Revolution
The second industrial revolution reached its peak between 1870 and 1914. Automatic signals, air brakes, electric light, and the typewriter were introduced around 1870. The elevator and structural steel for buildings, which led to the first skyscrapers, arrived in the 1880s. Bell Telephone Company began to adopt automatic telephone switchboards in 1919.
World War II and the Post-War Period
Colossus, the world's first programmable electronic computer, had a single purpose: to help decipher the Lorenz-encrypted (Tunny) messages between Hitler and his generals during World War II. During the war, various automobile companies conducted experiments in the field of automated production. The world's first electronic desktop calculators, the ANITA MARK 7 and MARK 8, were launched simultaneously in October 1961.
The Third Industrial Revolution
Advancing computer technology enabled exponential technological progress in the following decades. In the early 1970s, the Third Industrial Revolution, or Digital Revolution, was in its infancy, supported by developments in magnetic memory, microprocessors, and battery technology that allowed computer systems to be further refined. The availability of personal computers increased dramatically in the late 1970s and early 1980s, and this put us squarely into the Information Age that we are still in today.
The Fourth Industrial Revolution
Artificial intelligence (AI) is one of the key technologies transforming the economy, society, and the labour market. Robotics and cobots, which are designed specifically to interact with humans in a collaborative environment, will increase industrial production significantly; they can also spare employees from dangerous and unhealthy tasks. Augmented reality and virtual reality are technologies that merge the real world and the digital world using computer science. 3D and 4D printing technology is becoming increasingly important in design, architecture, engineering, and beyond.
Fig: Edmund Cartwright's power loom
Fig: Andrew Meikle's threshing machine
https://hakaimagazine.com/news/possible-evidence-of-worlds-oldest-fishing-nets-unearthed-in-korea/
https://www.ancient-origins.net/ancient-technology/ancient-invention-water-clock-001818
https://www.atlasobscura.com/articles/book-of-ingenious-devices
https://www.britannica.com/technology/calculator
https://www.bbc.co.uk/history/historic_figures/cartwright_edmund.shtml
http://scihi.org/andrew-meikle-threshing-machine/
ANS TO QUESTION 3:
“commuting by car has been found to be a debilitating, continuous drain on our happiness and quality of life”
I completely agree with this.
The time we waste on the road in traffic jams impacts our lives and our mental health. We could easily redirect that time to our families and to the things we actually enjoy doing.
“Collaboration over great distances is now often possible without travel.” That is actually true.
According to a survey by Global Workplace Analytics, 25-30% of the workforce has been working remotely since 2020. All around the world, more and more employers are embracing flexible schedules for their remote teams, leading to new remote work trends and remote work options.
“The World Wide Web is making us all part of a giant global digital village, enabling efficiencies beyond what merely living nearby would or even could provide”
The web connected the world in a way that was not possible before and made it much easier for people to get information, share it, and communicate. It allowed people to share their work and thoughts through social networking sites, blogs, and video sharing.
From the article, I also got to know what kinds of jobs we can do remotely and which we cannot. The author explains very elaborately how we can become a live-anywhere, work-anywhere-else society, and I agree with the point that modern civilization depends on centralized food and energy production.