#Algorithmic Decisionism
Look, algorithm, I am meeting you halfway: feed on my reactions, my shares, my bubbles, my affects. There is still a language that I, too, can learn by watching you.
#postprawda#PostTruth#Algorithmic Culture#Patrick Leftwich#Digital Philosophy#Critical Theory#Algorithmic Subjectivity#Techno Philosophy#Speculative Realism#Deleuze And Guattari#EFlux#Cybernetics#Live Coding#Post Humanism#Decentralization#Machine Ecology#Recursive Colonialism#Cognitive Capitalism#Aesthetics After Finitude#Algorithmic Imagination#Speculative Aesthetics#Techno Critique#ArtificialIntelligence#Inhuman Labor#Algorithmic Decisionism#Future Sound#Data Futures#AI#my thoughts
gamification
22. Video Games and Gamification in Art
Justification of the topic's currency and relevance
The impact of video games on contemporary culture has been undeniable, generating new narratives and immersive experiences that have influenced contemporary art. Gamification, understood as the application of game mechanics in non-ludic contexts, has transformed the way we interact with art, blurring the boundaries between the spectator and the active participant.
The use of video games and interactive elements in art allows a reconfiguration of the artistic experience, incorporating decision-making, simulation, and the exploration of virtual worlds. These strategies generate new forms of participation and social critique, addressing questions of control, identity, violence, and simulacrum.
Critical text based on the theoretical and conceptual framework
Authors such as Alexander Galloway (2006), in Gaming: Essays on Algorithmic Culture, have analyzed video games as cultural systems that configure new forms of subjectivity and agency. Ian Bogost (2011), in How to Do Things with Videogames, argues that video games are not mere entertainment but expressive media capable of generating critical discourse.
Cory Arcangel explores video game culture in pieces such as Super Mario Clouds (2002), in which he hacks a Super Mario Bros. cartridge to leave only the pixelated sky, foregrounding the aesthetics and nostalgia of the digital medium.
Jodi (Joan Heemskerk and Dirk Paesmans) deconstructs the logic of video games in works such as My%Desktop (2002), altering operating-system interfaces into visually chaotic experiences that defy the functionality and logic of digital design.
Eddo Stern addresses the relationship between video games, violence, and propaganda in works such as Darkgame (2010), in which players experience the game through incomplete sensory stimuli, reflecting manipulation in digital and military environments.
Molleindustria, a collective of artists and programmers, creates critical video games such as Phone Story (2011), which denounces labor exploitation in the technology industry by forcing the player to perform symbolic tasks related to child labor and mineral extraction.
Angela Washko questions the representation of gender in video games with projects such as The Council on Gender Sensitivity and Behavioral Awareness in World of Warcraft (2012), intervening within WoW to generate debates about sexism in virtual spaces.
Kent Sheely explores the gamification of art in projects such as Kill Screen (2013), capturing moments of glitch and error in shooter games to point to the aestheticization of violence and the fragmentation of digital experience.
Conclusion
The use of video games and gamification in contemporary art has opened new possibilities of interaction and critique, exploring themes such as control, violence, digital identity, and memory. Artists such as Cory Arcangel, Jodi, Eddo Stern, Molleindustria, Angela Washko, and Kent Sheely have used video games as an artistic tool to question digital culture and generate participatory experiences. In a world where ludification has expanded beyond entertainment, gamified art offers a platform for rethinking the structures of power, immersion, and subjectivity in the twenty-first century.
What if… Information Processing as Hyperobject
Capitalism is not a human invention, but a viral contagion, replicated cyberpositively across post-human space. Self-designing processes are anastrophic and convergent: doing things before they make sense. Time goes weird in tactile self-organizing space: the future is not an idea but a sensation. – Sadie Plant and Nick Land
HYPERORGANISMS AND ZOMBIE SOCIETY
Reading R. Scott Bakker's blog this morning, I came across an interesting post, The Zombie Enlightenment. In it he mentions the notion of "…post-Medieval European society as a kind of information processing system, a zombie society". As so often, this set my mind on hyperdrive. I was reminded of my recent reading of Timothy Morton's interesting work Hyperobjects: Philosophy and Ecology after the End of the World, where he describes a hyperobject:
the term hyperobjects to refer to things that are massively distributed in time and space relative to humans. A hyperobject could be a black hole. A hyperobject could be the Lago Agrio oil field in Ecuador, or the Florida Everglades. A hyperobject could be the biosphere, or the Solar System. A hyperobject could be the sum total of all the nuclear materials on Earth; or just the plutonium, or the uranium. A hyperobject could be the very long-lasting product of direct human manufacture, such as Styrofoam or plastic bags, or the sum of all the whirring machinery of capitalism. Hyperobjects, then, are “hyper” in relation to some other entity, whether they are directly manufactured by humans or not.1
Morton’s “the sum of all the whirring machinery of capitalism” brought to mind Nick Land’s adaptation of Deleuze and Guattari: accelerating capital as an informational entity that is auto-organizing energy, matter, and information toward a technological Singularity (i.e., “There’s only really been one question, to be honest, that has guided everything I’ve been interested in for the last twenty years, which is: the teleological identity of capitalism and artificial intelligence” – here). We’ve seen how the debt system in D&G is part of an algorithmic memory or processing system that marks and channels desire, the flows of energy-matter: here and here (i.e., “Society is not exchangist, the socius is inscriptive: not exchanging but marking bodies, which are part of the earth. We have seen that the regime of debt is the unit of alliance, and alliance is representation itself. It is alliance that codes the flows of desire and that, by means of debt, creates for man a memory of words (paroles).” and: “Man must constitute himself through repression of the intense germinal influx, the great biocosmic memory that threatens to deluge every attempt at collectivity.”). Of course they spoke in anthropological terms that seem quaint now, in our age of computational jargon, which brings me to Cesar Hidalgo.
We build against sadism. We build to experience the joy of its every fleeting defeat. Hoping for more joy, for longer, each time, longer and stronger; until, perhaps, we hope, for yet more; and you can’t say it won’t ever happen, that the ground won’t shift, that it won’t one day be the sadisms that are embattled, the sadisms that are fleeting, on a new substratum of something else, newly foundational, that the sadisms won’t diminish or be defeated, that those for whom they are machinery of rule won’t be done. … – China Miéville, On Social Sadism
EMERGENCE, SOLIDITY, AND COMPUTATION: CAPITAL AS HYPERORGANISM
In Why Information Grows: The Evolution of Order, from Atoms to Economies, Cesar Hidalgo describes the basic physical mechanisms that contribute to the growth of information. These include three important concepts: the spontaneous emergence of information in out-of-equilibrium systems (the whirlpool example), the accumulation of information in solids (such as proteins and DNA), and the ability of matter to compute.2
Explicating this, he tells us that the first idea connects information with energy, since information emerges naturally in out-of-equilibrium systems: systems of many particles characterized by substantial flows of energy, flows that allow matter to self-organize (Hidalgo, KL 2448). The second idea is that solids are essential for information to endure; yet not just any solid can carry information. To carry information, solids need to be rich in structure (Hidalgo, KL 2465). Finally, energy is needed for information to emerge, and solids are needed for information to endure, but for the growth of information to explode we need one more ingredient: the ability of matter to compute (i.e., the final step is intelligence and auto-awareness, decisional and ecological) (Hidalgo, KL 2475). As he remarks:
The fact that matter can compute is one of the most amazing facts of the universe. Think about it: if matter could not compute, there would be no life. Bacteria, plants, and you and I are all, technically, computers. Our cells are constantly processing information in ways that we poorly understand. As we saw earlier, the ability of matter to compute is a precondition for life to emerge. It also signifies an important point of departure in our universe’s ability to beget information. As matter learns to compute, it becomes selective about the information it accumulates and the structures it replicates. Ultimately, it is the computational capacities of matter that allow information to experience explosive growth.(Hidalgo, KL 2477-2482).
Of course Hidalgo, like many current thinkers, never asks the obvious questions: What, if anything, is behind this? Is there a telos to this information-processing initiative of the universe, or is it all blind accident and process, a sort of accidental start-up algorithm in matter that began with the Big Bang, part of the nature of things from the beginning? He describes self-organizing matter, its need for more permanent and enduring structures to support its processes, and then the emergence of computation or intelligence: “these objects allow us to form networks that embody an increasing amount of knowledge and knowhow, helping us increase our capacity to collectively process information” (Hidalgo, KL 2518).
I’ve never liked the “self” in self-organizing – it seems too human, all too human a concept. Maybe auto-organizing should be its replacement. Either way, what needs to be elided is the notion that there is some essential or core being behind the appearances directing this auto-organizing activity. It is more a blind process rooted in the actual quantum and relativistic aspects of our universe than the work of some personality behind things (i.e., God or Intelligence). When does matter become purposeful, attain a teleological, goal-oriented ability to organize itself and its environment? Is this what life is? Is life that threshold? Or something else? Many living creatures need no awareness, no auto-distancing from their environment, to appear purposeful. Think of those elder creatures of the oceans, the predators, the sharks, their drive to hunt, select, kill. Is this a telos, or just the organic mode of information as blind process working in an environment to satisfy the base requirements to endure?
We as humans seem to think we’re special, situated as the exception rather than the rule. But are we? No. What if we, like all other durable organic systems, are just the working out of blind processes and algorithms of information processing as it refines itself and emerges into greater and greater complexity? But this is to assume that “us” will remain human, that this teleological or non-teleological process ends with the human species. Does it? Or are we but the transitional object of some further emergence, one that would be more permanent, more adaptive to self-organizing matter, more enduring, more computationally viable? I think you know where I’m going here: the machinic phylum, the emergence of AI, robotics, nanotech, ICTs, etc. that we see all around us. Are these not the further immanent self-organization of matter into greater and more lasting forms that will eventually outpace the organic hosts that supported their emergence? Are we not seeing the edge of this precipice in such secular myths as posthumanism and transhumanism? The Technological Singularity as a more refined emergence of this self-organizing information-processing entity or entities: a collective, hive, even distributed intelligence emerging in such external devices?
Hidalgo mentions the personbyte theory, which posits a relationship between the complexity of an economic activity and the size of the social and professional network needed to execute it. Activities that require more personbytes of knowledge and knowhow must be executed by larger networks. This relationship helps explain the structure and evolution of our planet’s industrial structures. The personbyte theory implies (1) that simpler economic activities will be more ubiquitous, (2) that diversified economies will be the only ones capable of executing complex economic activities, (3) that countries will diversify toward related products, and (4) that over the long run a region’s level of income will approach the complexity of its economy, which we can approximate by looking at the mix of products produced and exported by a region, since products inform us about the presence of knowledge and knowhow in a region. (Hidalgo, KL 2524-2530).
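The diversity-and-ubiquity logic behind these implications can be sketched in a few lines of Python. The country-product data below is invented for illustration; Hidalgo's actual analyses run over world trade tables:

```python
# Toy sketch of the diversity/ubiquity logic behind Hidalgo's
# economic-complexity measures (illustrative data, not real exports).
# M maps each country to the set of products it exports competitively.
M = {
    "A": {"shirts", "toys", "chips", "aircraft"},
    "B": {"shirts", "toys"},
    "C": {"shirts"},
}

products = set().union(*M.values())

# Diversity: how many products a country exports.
diversity = {c: len(ps) for c, ps in M.items()}

# Ubiquity: how many countries export a given product.
ubiquity = {p: sum(p in ps for ps in M.values()) for p in products}

# The diversified economy (A) holds products few others make (low ubiquity),
# which the personbyte theory reads as more knowhow embodied in its networks.
for c in sorted(M):
    avg_ubiq = sum(ubiquity[p] for p in M[c]) / len(M[c])
    print(c, diversity[c], round(avg_ubiq, 2))
```

The full method iterates these two measures against each other ("method of reflections"), but even this first pass shows the asymmetry: everyone makes shirts, only the complex economy makes aircraft.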
In this sense capitalism is an informational entity or hyperobject, a self-organizing structure for energy, matter, and information to further its own emergence through temporal computational algorithms. As Hidalgo reiterates this dance of information and computation is powered by the flow of energy, the existence of solids, and the computational abilities of matter. The flow of energy drives self-organization, but it also fuels the ability of matter to compute. Solids, on the other hand, from proteins to buildings, help order endure. Solids minimize the need for energy to produce order and shield information from the steady march of entropy. Yet the queen of the ball is the emergence of collective forms of computation, which are ubiquitous in our planet. Our cells are networks of proteins, which form organelles and signaling pathways that help them decide when to divide, differentiate, and even die. Our society is also a collective computer, which is augmented by the products we produce to compute new forms of information. (Hidalgo, KL 2532-2537).
CROSSING THE RUBICON?
Yet is the organic base the most efficient? Are we not already dreaming of more permanent structures, of more enduring and durable robotics and machines? Hidalgo is hopeful for collective humanity, but is this hope warranted? It looks more as if we are a form of matter that was useful up to this point but now appears increasingly obsolete and limited for the further auto-organization of information. What Kant termed finitude is this limiting factor for humans: the human condition. Are we seeing the power of matter, energy, and informational auto-organization about to make the leap from the human to a more permanent form? A crossing of the Rubicon which humanity, as a species, may not survive? Possibly even merging ourselves into more permanent structures to support information and intelligence in its need to escape the limits of planetary existence?
The questions we need to be raising now are these: What happens to humans if machines gradually replace us on the job market? When, if ever, will machines outcompete humans at all intellectual tasks? What will happen afterward? Will there be a machine-intelligence explosion leaving us far behind, and if so, what role, if any, will we humans play after that?3 Max Tegmark lists the usual ill-informed answers circulating on the blogosphere, none of which settles the question:
Scaremongering: Fear boosts ad revenues and Nielsen ratings, and many journalists seem incapable of writing an AI article without a picture of a gun-toting robot.
“It’s impossible”: As a physicist, I know that my brain consists of quarks and electrons arranged to act as a powerful computer, and that there’s no law of physics preventing us from building even more intelligent quark blobs.
“It won’t happen in our lifetime”: We don’t know what the probability is of machines reaching human-level ability on all cognitive tasks during our lifetime, but most of the AI researchers at a recent conference put the odds above 50 percent, so we’d be foolish to dismiss the possibility as mere science fiction.
“Machines can’t control humans”: Humans control tigers not because we’re stronger but because we’re smarter, so if we cede our position as the smartest on our planet, we might also cede control.
“Machines don’t have goals”: Many AI systems are programmed to have goals and to attain them as effectively as possible.
“AI isn’t intrinsically malevolent”: Correct – but its goals may one day clash with yours. Humans don’t generally hate ants, but if we wanted to build a hydroelectric dam and there was an anthill there, too bad for the ants.
“Humans deserve to be replaced”: Ask any parent how they’d feel about your replacing their child by a machine and whether they’d like a say in the decision.
“AI worriers don’t understand how computers work”: This claim was mentioned at the above-mentioned conference and the assembled AI researchers laughed hard. (Brockman, pp. 44-45)
Tegmark will – as Hidalgo did – speak of humans as information processing systems:
we humans discovered how to replicate some natural processes with machines that make our own wind, lightning, and horsepower. Gradually we realized that our bodies were also machines, and the discovery of nerve cells began blurring the borderline between body and mind. Then we started building machines that could outperform not only our muscles but our minds as well. So while discovering what we are, will we inevitably make ourselves obsolete? (Brockman, p. 46)
That’s the hard question at the moment. And, one still to be determined. Tegmark’s answer is that we need to think this through: “The advent of machines that truly think will be the most important event in human history. Whether it will be the best or worst thing ever to happen to humankind depends on how we prepare for it, and the time to start preparing is now. One doesn’t need to be a superintelligent AI to realize that running unprepared toward the biggest event in human history would be just plain stupid.” (Brockman, p. 46)
INVENTING A MODEL OF THE FUTURE? HYPERSTITIONAL ENERGETICS?
What would be interesting is to build an informational model, a software application that would model this process, from the beginning of the universe to now, as an auto-organizing system of matter, energy, and information moving into various niches of complexification as it stretches across temporal dimensions as a hyperobject or superorganism. Watch it in the details of, let’s say, a Braudelian input of material economic and socio-cultural data on the emergence of capitalism as a hyperobject over time, and its complexification up to this projected Singularity. Obviously one would use statistical and probabilistic formulas and mathematical algorithms, with sample data, to accomplish this. Either way it would show possible scenarios of the paths forward for human and machinic systems as they converge/diverge in the coming years. I assume the complexity theorists in New Mexico have worked out such approximations? I need to study this… someone like Stuart Kauffman? Such as this essay: here:
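As a toy gesture toward such a model (every parameter here is invented, and logistic growth is only one of many candidate dynamics), one might sketch "complexity" accumulating under random historical shocks:

```python
import random

# Toy sketch of the kind of model gestured at above: complexity x grows
# logistically toward a carrying capacity K, with Gaussian shocks standing
# in for the contingencies of history. All parameters are invented.
random.seed(42)

def step(x, r=0.05, K=1.0, noise=0.002):
    # Deterministic logistic growth plus a small random perturbation,
    # clamped to the physically meaningful range [0, K].
    dx = r * x * (1 - x / K) + random.gauss(0, noise)
    return min(max(x + dx, 0.0), K)

x = 0.05  # small initial complexity
trajectory = [x]
for _ in range(500):
    x = step(x)
    trajectory.append(x)

print(round(trajectory[-1], 3))  # approaches K under most random seeds
```

A serious version would replace the single scalar with coupled variables for energy flows, durable structures, and computational capacity, per Hidalgo's three ingredients, but even this caricature shows the shape of the exercise: contingency riding on top of a directional drift.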
The universe is open in being partially lawless at the quantum-classical boundary (which may be reversible). As discussed, the universe is open upward in complexity indefinitely. Based on unprestatable Darwinian exaptations, the evolution of the biosphere, economy and culture seem beyond sufficient law, hence the universe is again open. The unstatable evolution of the biosphere opens up new Adjacent Possible adaptations. … It seems true both that the becoming of the universe is partially beyond sufficient natural law, and that opportunities arise and disappear and either ontologically, or epistemologically, or lawlessly, may or may not be taken, hence can change the history of our vast reaction system, perhaps change the chemistry in galactic giant cold molecular clouds, and change what happens in the evolution of the biosphere, economy and history.
Sounds familiar, in the sense of Meillassoux’s attack on sufficient causation (i.e., the ‘principle of sufficient reason’), when Kauffman mentions that “the evolution of the biosphere, economy and culture seem beyond sufficient law, hence the universe is again open”. Of course Kauffman’s thesis is: “a hypopopulated chemical reaction system on a vast reaction graph seems plausibly to exhibit, via quantum behavior and decoherence, the acausal emergence of actual molecules via acausal decoherence and the acausal emergence of new ontologically real adjacent possibles that alter what may happen next, and give rise to a rich unique history of actual molecules on a time scale of the life time of the universe or longer. The entire process may not be describable by a law.” In other words, it is outside “sufficient reason”.
In The Blank Swan: The End of Probability, Elie Ayache is, like Land, tempted to see Capitalism as a hyperobject or entity, saying, “What draws me to Deleuze is thus my intention of saying the market as univocal Being”.4 He goes on to say:
The problem with the market is that it is immanence incarnate. It has no predefined plane. Much as I had first intended derivatives and their pricing as my market and my surface, I soon found myself making a market of the writings of Meillassoux, Badiou and Deleuze. They became my milieu of immanence. The plane of immanence on which to throw my concept of the market soon became a plane of immanence on which to deterritorialize thought at large. I soon became tempted to remake philosophy with my concept of the market rather than remake the market with a new philosophy. The market became a general metaphor for writing, the very intuition of the virtual with which it was now possible to establish contact. I was on my way to absolute deterritorialization, and the question became how to possibly deliver this ‘result’ otherwise than in a book that was purely philosophical. (Ayache, pp. 303-304)
Of course he’s dealing with the specifics of trading in the derivatives market, but one can extrapolate to a larger nexus of possibilities. This notion of both capital and thought making a pact of absolute deterritorialization seems to align with Hidalgo’s history of information theory and its own auto-organizational operations.
Ayache, like Land, sees the market as a unified entity: The market, as market, is one reality. It cannot be separated or differentiated by external difference. It is an intensity: the intensity of the exchange, presumably. It follows no stochastic process, with known volatility or jump parameters. It is a smooth space, as Deleuze would say, not a striated space. (Ayache, p. 325)
As well as an organism: What gets actualized and counter-actualized (i.e. differentiated) here is the whole probability distribution, the whole range of possibilities, and the process is the process of differentiation (or distinction, or emergence, literally birth) of that put. The market differentiates itself literally like an organism, by ‘growing’ that put (like an animal grows a tail or like birds grow wings) and by virtually growing all the successive puts that our trader will care to ask about. (Ayache, p. 338) In his book Hidalgo mentions a curious statement: “As of today, November 11, 2014, “why information grows” returns four hits on Google. The first one is the shell of an Amazon profile created for this book by my UK publisher. Two of the other hits are not a complete sentence, since the words are interrupted with punctuation. (By contrast, the phrase “why economies grow” returns more than twenty-six thousand hits.)” (Hidalgo, KL 2645) So the notion of the market as an entity that grows informationally seems almost apparent to many at the moment.
Hidalgo also mentions the father of neoliberalism, Friedrich Hayek, who famously made this point in a 1945 paper (“The Use of Knowledge in Society,” American Economic Review 35, no. 4 [1945]: 519-530). There, Hayek identified money as an information-revelation mechanism that helped uncover information regarding the availability and demand of goods in different parts of the economy (Hidalgo, KL 3060). This notion of money as a “revelation mechanism” fits current readings of Bitcoin as a virtual apparatus for informational mechanisms and the market growth of Capital as a Hyperorganism.
THE VIRTUAL ECONOMY: BLOCKCHAIN TECHNOLOGY AND BITCOIN-ECONOMICS
Some say we are in the Age of Cryptocurrency, in which Bitcoin and blockchain technology will move things into the virtual arena, where energy, matter, and information are enabled to push this growth process forward in an ever-accelerating manner (see here): part of what they’re terming the programmable economy. As Sue Troy explains it, the programmable economy — a new economic system based on autonomic, algorithmic decisions made by robotic services, including those associated with the Internet of Things (IoT) — is opening the door to a range of technological innovation never before imagined. This new economy — and more specifically the blockchain and metacoin platforms that underpin it — promises to be useful for an astonishing variety of problems: reducing forgery and corruption, simplifying supply-chain transactions, even greatly minimizing spam. In her interview she states:
Valdes explained the technical foundations of the blockchain ledger and the programmable economy. He described the programmable economy as an evolution of the API economy, in which businesses use APIs to connect their internal systems with external systems, which improves the businesses’ ability to make money but is limited by the fact that the systems are basically siloed from one another. The Web was the next step in the evolution toward the programmable economy, he said, because it represents a “global platform for programmable content. It was decentralized; it was a common set of standards. Anyone can put up a Web server and plug into this global fabric for content and eventually commerce and community.”
The programmable economy, Valdes said, is enabled by “a global-scale distributed platform for value exchange. … The only thing that’s uncertain is what form it will take.” Valdes pointed to Bitcoin, which uses blockchain ledger technology, as a prominent example of a “global-scale, peer-to-peer, decentralized platform for global exchange.”
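The ledger idea behind the platforms Valdes describes can be sketched minimally: each block commits to the hash of its predecessor, so altering any record breaks every link downstream. This is a sketch only; real blockchains add proof-of-work, digital signatures, and peer-to-peer consensus on top of this chaining:

```python
import hashlib
import json

# Minimal sketch of a hash-chained ledger (illustrative, not a real chain).
def make_block(data, prev_hash):
    block = {"data": data, "prev": prev_hash}
    payload = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block

chain = [make_block("genesis", "0" * 64)]
for tx in ["A pays B 5", "B pays C 2"]:
    chain.append(make_block(tx, chain[-1]["hash"]))

def valid(chain):
    # Every block must reference the previous block's hash, so tampering
    # with any block invalidates everything after it.
    return all(cur["prev"] == prev["hash"]
               for prev, cur in zip(chain, chain[1:]))

print(valid(chain))  # True

# Tamper with an early transaction and rehash that block:
chain[1] = make_block("A pays B 5000", chain[1]["prev"])
print(valid(chain))  # False: block 2 still points at the old hash
```

It is this tamper-evidence, replicated across many machines with no central bookkeeper, that lets advocates imagine contracts and even corporations "untethered from human control."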
Ultimately, Valdes states that the idea of programmability can be extended to the corporate structure. Today the rules of incorporation are fixed, and the corporation is represented by its employees and a board of directors. In the future, corporations could be “more granular, more dynamic and untethered from human control”.
Of course this fits into the notion that the future City States or Neocameral Empires will also become “more granular, more dynamic and untethered from human control” as machinic intelligence and other convergences of the NBIC technologies take over more and more from humans.
One wants to take a step back, catch one’s breath, and say: “Whoa there, partner, just wait a minute!” But by the time we institute ethical or governmental measures it will, as through most of history, be far too late to stop or even slow this juggernaut of growing informational hyperorganisms. As one advocate suggested, there will come a time when everything is connected in an information environment: “You can put monitors in anything to measure or quantify exchanges, the sensors are connected to smart contracts, the contracts are changing as the exchanges take place, so you have this dynamic process that’s taking place in the supply chain, constantly refreshing the economic conditions that surround it…” (see). In this programmable information economy, as Troy reports, organizations of the future will need a different organizational model: “You see society changing in a sharing, collaborative environment. Think about it being the same internally.”
As the pundit Jacob Donnelly tells it, bitcoin is in existential crisis, yet its future is increasingly bright. This is the seventh year in the development of the network, and it takes years to build out a protocol, which is what bitcoin is. As Joel Spolsky says, “Good software takes 10 years. Get used to it.”
“Bitcoin is comparable to the pre-web-browser 1992-era Internet. This is still the very early days of bitcoin’s life. The base layer protocol is now stable (TCP/IP). Now engineers are building the second layer (HTTP) that makes bitcoin usable for average people and machines,” Jeff Garzik, founder of Bloq and Core developer of bitcoin, told me.
Once the infrastructure is built, which still has many more years ahead of it, with companies like Bloq, BitGo, 21.co, and Coinbase leading the charge, we’ll begin to see solid programs built in the application layer.
But even while we wait for the infrastructure to be built, it’s clear that bitcoin is evolving. Bitcoin is not perfect. It has a lot of problems that it is going to have to overcome. But to label it dead or to call for it to be replaced by something new is naive and shortsighted. This battle in the civil war will end, likely with Bitcoin Classic rolling out a hard fork with significant consensus. New applications will be built that provide more use cases for different audiences. And ultimately, the Internet will get its first, true payment protocol.
But Bitcoin is seven years old. It will take many years for the infrastructure to be laid and for these applications to reach critical mass. Facebook had nearly 20 years after the browser was released to reach a billion users. To imagine bitcoin’s true potential, we need to think in decades, not in months or years. Fortunately, we’re well on our way.
FUTURE TECH: AUGMENTED IMMERSION AND POLICING INFORMATION
One imagines a day when every aspect of one’s environment, internal and external, intrinsic and extrinsic, is programmable and open to revision, update, change, and exchange in an ongoing informational economy so invisible and ubiquitous that even the machines will forget they are machines: only information growth will matter, along with its durability, expansion, and acceleration.
In an article by Nicole Laskowski she tells us augmented and virtual reality technologies may be better suited for the enterprise than the consumer market as these technologies become more viable. Google Glass, an AR technology, for example, raised ire over privacy concerns. But in the enterprise? Employees could apply augmented and virtual reality technology to build rapid virtual prototypes, test materials, and provide training for new employees — all of which can translate into productivity gains for the organization.
“The greatest level of adoption is around the idea of collaboration,” Soechtig said. Teams that aren’t in the same physical environment can enter a virtual environment to exchange information and ideas in a way that surpasses two-dimensional video conferencing or even Second Life Enterprise. Nelson Kunkel, design director at Deloitte Digital, described virtualized collaboration as an “empathetic experience,” and Soechtig said the technology can “take how we communicate, share ideas and concepts to a completely new level.”
For some companies, the new level is standard operating procedure. Ford Motor Company has been using virtual reality internally for years to mock up vehicle designs at the company’s Immersion Lab before production begins. Other companies, such as IKEA, are enabling an augmented reality experience for the customer. Using an IKEA catalogue and catalogue app, customers can add virtual furnishings to their bedrooms or kitchens, snap a photo and get a sense for what the items will look like in their homes. And companies such as Audi and Marriott are turning VR headsets over to customers to help them visually sift through their choices for vehicle customizations and virtually travel to other countries, respectively.
Vendors, too, see augmented and virtual reality as an opportunity — from Google and its yet-to-hit-the-market Google Glass: Enterprise Edition to Facebook and its virtual reality headset, Oculus Rift, to Microsoft and its HoloLens, which it describes as neither augmented nor virtual reality, but rather a “mixed reality that lets you enjoy your digital life while staying more connected to the world around you,” according to the website. All three companies have eyes on the enterprise.
NEOCAMERALISM OR GOVERNANCE OF INFORMATION
Is this techno-optimism or its opposite, utopia or dystopia… will we even be there to find out? In his book The Disordered Police State: German Cameralism as Science and Practice, on the old princedoms of the cameral states of Germany, Andre Wakefield comments:
The protagonist of my story is the Kammer, that ravenous fiscal juridical chamber that devoured everything in its path. History, I am told, is only as good as its sources, and the cameral sciences, which purported to speak publicly about the most secret affairs of the prince, were deeply dishonest. We cannot trust them. And because many of the most important cameral sciences were natural sciences, the dishonesty of the Kammer has been inscribed into the literature of science and technology as well. There is no avoiding it.5
The German cameralists were the writer-administrators and academics who had provided a blueprint for governance in early modern Germany. Much like our current systems of academic and Think Tank experts who provide the base blueprints for governance around the world today.
Much of what we read in books about our future is spawned in part and funded by such systems of experts, academics, and governmental or corporate powers seeking to convince, manipulate, and guide the very construction of a future tending toward their goals and agendas. A sort of policing of culture: a policy is a policing, a movement of the informational context to support these entities and organizations.
In the future we will indeed program many capabilities that closely resemble those arising from ‘true’ intelligence into the large-scale, web-based systems that are likely to increasingly permeate our societies: search engines, social platforms, smart energy grids, self-driving cars, as well as a myriad other practical applications. All of these will increasingly share many features of our own intelligence, even if lacking a few ‘secret sauces’ that might remain to be understood.6
One aspect of this that I believe people and pundits overlook is that the large datastores needed for this will require knowledge workers for a long while to input the data needed by these advanced AI systems. I believe that rather than jobs and work being downsized by automation, work will be opened up into ever-increasing informational ecosystems that we have yet to even discern, much less understand. I'm not optimistic about this whole new world, yet it is apparent that it is coming and organizing us as we organize it. Land spoke of hyperstition as a self-replicating prophecy. If the books, journals, and other memes elaborated around this notion of an information economy and exchange are valid, we are moving into this world at light-speed, and our older political, social, and ethical systems are being left far behind, unable to cope with this new world of converging technologies and information intelligence.
More and more, our planet will seem an intelligent platform or hyperorganism: a fully connected biospheric intelligence or sentient being of matter, energy, and information, a self-organizing entity that revises, updates, edits, and organizes its information on climate, populations, bioinformatics, etc. along trajectories that we as humans, as an atomistic society, were incapable of following. Change is coming… but whether for the better, no one can yet say. Eerily reminiscent of Ovid's Metamorphoses, humans may merge or converge with this process to become strangely other: at once monstrous and uncanny.
(I’ll take this up in a future post…)
*Max Tegmark: Physicist, cosmologist, MIT; scientific director, Foundational Questions Institute; cofounder, Future of Life Institute; author, Our Mathematical Universe
Morton, Timothy (2013-10-23). Hyperobjects: Philosophy and Ecology after the End of the World (Posthumanities) (Kindle Locations 106-111). University of Minnesota Press. Kindle Edition.
Hidalgo, Cesar (2015-06-02). Why Information Grows: The Evolution of Order, from Atoms to Economies (Kindle Locations 2446-2448). Basic Books. Kindle Edition.
Brockman, John (2015-10-06). What to Think About Machines That Think: Today’s Leading Thinkers on the Age of Machine Intelligence (p. 43). HarperCollins. Kindle Edition.
Ayache, Elie (2010-04-07). The Blank Swan: The End of Probability (p. 299). Wiley. Kindle Edition.
Andre Wakefield. The Disordered Police State: German Cameralism as Science and Practice (Kindle Locations 379-382). Kindle Edition.
Shroff, Gautam (2013-10-22). The Intelligent Web: Search, smart algorithms, and big data (p. 274). Oxford University Press, USA. Kindle Edition.
https://socialecologies.wordpress.com/2016/02/15/what-if-information-processing-as-hyperobject/
2 notes
·
View notes
Text
AI is changing the way music is made
The idea that artificial intelligence can compose music frightens many people in the music industry. Yet AI music-making software has advanced so far in recent years that it is no longer a scary novelty; it is a viable tool that producers can use, and are already using, to support the creative process.
This raises the question: could artificial intelligence one day replace musicians?
Using AI as a tool to make music, or to assist musicians, has been practiced for quite some time. In the 1990s, David Bowie helped develop an application called Verbasizer, which took literary source material and randomly reordered the words to create new combinations that could be used as lyrics. In 2016, researchers at Sony used software called Flow Machines to create a melody in the style of The Beatles.
At the consumer level, the technology is already integrated into popular music-making programs such as Logic, software used by musicians around the world, which can auto-fill unique drum patterns with the help of AI.
Composition algorithms are usually classified by the specific programming techniques they use. Their results can be divided into computer-composed music and computer-assisted music. Music can be considered computer-composed when the algorithm is able to make its own decisions during the creative process.
Another way to sort compositional algorithms is to examine the results of their compositional processes. Algorithms can either:
Provide notation information (score or MIDI) for other instruments
Provide an independent form of sound synthesis (play the composition by themselves)
There are also algorithms that create both notational data and sound synthesis.
One way to categorize composition algorithms is by their structure and the way they process data, as in this model of six partially overlapping types:
mathematical models
knowledge-based systems
grammars
evolutionary methods
systems that learn
hybrid systems
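As a toy illustration of the "systems that learn" category, a first-order Markov chain can be trained on a small corpus of notes and then sampled to produce a new melody. This is a minimal sketch; the corpus, note values, and function names are invented for illustration.

```python
import random

# A tiny "system that learns": a first-order Markov chain trained on a
# toy corpus of MIDI note numbers, then sampled to emit a new melody.
corpus = [60, 62, 64, 62, 60, 64, 65, 67, 65, 64, 62, 60]  # C-major fragment

# Learn transition lists between consecutive notes.
transitions = {}
for a, b in zip(corpus, corpus[1:]):
    transitions.setdefault(a, []).append(b)

def generate(start=60, length=8, seed=0):
    """Sample a melody by walking the learned transition table."""
    rng = random.Random(seed)
    melody = [start]
    for _ in range(length - 1):
        options = transitions.get(melody[-1], corpus)  # fall back to corpus
        melody.append(rng.choice(options))
    return melody

print(generate())
```

Real systems replace the transition table with a deep network, but the learn-then-sample loop is the same shape.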
There is now a whole industry built on AI services for creating music, including the aforementioned Flow Machines, IBM Watson Beat, Google Magenta's NSynth Super, Jukedeck, Melodrive, Spotify's Creator Technology Research Lab, and Amper Music.
Most of these systems work by using deep learning networks, a type of artificial intelligence that depends on analyzing large amounts of data. Essentially, you feed the software tons of source material, from dance hits to classical records, which it then analyzes to find patterns.
It picks up things like chords, tempo, duration, and how notes relate to one another, learning from all the input so that it can write its own melodies. There are differences between the platforms: some deliver MIDI while others deliver audio. Some learn purely by examining data, while others rely on hard-coded rules grounded in music theory to guide their output.
However, they all have one thing in common: on a micro scale the music is convincing, but the longer you listen, the less sense it makes. None of them is good enough to create a Grammy-winning song on its own... yet.
References:
Algorithmic composition. (March 19, 2019). Retrieved March 29, 2019, from Wikipedia, the free encyclopedia: https://en.wikipedia.org/wiki/Algorithmic_composition
Deahl, D. (August 31, 2018). How AI-generated music is changing the way hits are made. Retrieved March 29, 2019, from The Verge: https://www.theverge.com/2018/8/31/17777008/artificial-intelligence-taryn-southern-amper-music
2 notes
·
View notes
Text
Could artificial intelligence have a soul in the near future?
The question of whether an artificial intelligence (AI) can have a soul is a philosophical and theological issue that has been debated for a long time. From a scientific point of view, however, the concept of a "soul" is something that cannot be objectively measured or quantified, and therefore there is no way to empirically prove whether or not an AI has a soul.
As for whether it is possible for AIs to develop a consciousness or "mind" resembling that of humans, that is a topic that has been widely debated in the scientific community. At present, there is no strong evidence that an AI can develop consciousness or a sense of identity similar to that of humans.
It is important to note that while AIs can be programmed to perform complex tasks and make decisions based on certain criteria, their operation is still essentially mechanical and limited by the rules and algorithms that have been programmed into them. Thus, while AIs may seem "intelligent" in certain respects, they are still essentially machines and do not have the ability to experience emotions, have subjective experiences, or develop self-awareness.
In short, while it is possible that AIs will become more and more advanced in the future, it is unlikely that they will ever have a "soul" or consciousness similar to that of humans.
0 notes
Note
could you say a little about the talk on drones? I always appreciate your take on things
Part of why I was wishing I had access to a video or something is that my notes are pretty spotty and I saw the talk–by Mark Andrejevic–weeks ago. But what I can reconstruct from them is:
Drones aren’t just drones. They are, quoting Alex Rivera, “an incandescent reflection, the most extreme expression of who we are and what we’ve become generally.” (This is part of why so many academics are interested in them.) Drones function on the basis of total knowledge, if not total control, of an environment; anticipation; and preemption. Root causes recede in importance, because if you have sufficient knowledge of an environment, the ability to mark behavior as deviant or problematic, and the remote reach that drones provide, you can “just act.” Preemption becomes the “solution” to intractable problems, rather than addressing the problems themselves with the expectation of a payoff later.
The talk connected all this to a few things, including Google Ads. Increasingly, Andrejevic explained, the content of ads doesn’t matter at all. All that matters is the metadata: how they travel and the clicks they generate, and from whom. Jeremy Packer has written about this in connection with drones, arguing that what matters now is epistemology rather than ideology; how the ads perform rather than what messages they send. This is a broader condition applying to a lot of aspects of the world at this time; one quotation I did manage to transcribe was,
“The meta-digital machine of post-truth politics belongs to an automated regime of communication designed to endlessly explore isolated and iterated behaviors we might call conducts…. This…involves an utter indifference towards the data retrieved and transmitted insofar as these only serve as a background.” – Luciana Parisi, “Reprogramming Decisionism”
Basically, the argument here goes, the drone world is a world where epistemology rather than ideology is the key determinant of outcomes and avenue of critique (metadata rather than data). As drones increasingly determine political distinctions between friend and foe, their internal operation will come to resemble the current state of digital advertising. What will matter is how the drone performed in the moment (did it distinguish a target, make an algorithmically robust judgement, eliminate the target, minimize collateral damage, etc) more than the specifics of who it killed and why. In Packer's words, "It is not simply that drones can locate real pre-existing enemies more accurately; rather they can collect and process the necessary data to determine algorithmically the threat potential of any given situation/subject and act accordingly."
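The advertising side of this analogy can be made concrete with a toy sketch: a decision procedure that ranks ads purely by their click metadata, treating the content itself as an opaque background. All the figures and field names below are invented for illustration.

```python
# Toy illustration of "epistemology over ideology": ads ranked purely by
# observed behavior (impressions/clicks metadata), with the ad content
# treated as an opaque background that the system never inspects.
ads = [
    {"id": "a", "content": "<ignored>", "impressions": 1000, "clicks": 12},
    {"id": "b", "content": "<ignored>", "impressions": 800,  "clicks": 40},
    {"id": "c", "content": "<ignored>", "impressions": 1200, "clicks": 18},
]

def ctr(ad):
    """Click-through rate: the only signal the decision uses."""
    return ad["clicks"] / ad["impressions"]

# The "decision" is a pure function of metadata: serve the highest-CTR ad.
best = max(ads, key=ctr)
print(best["id"])  # -> b
```

The point of the sketch is what is absent: nothing in the decision path ever reads `content`.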
Andrejevic also pointed out that projects like Amazon’s ambition to deliver purchases almost instantly via drone, or Google’s similar project Wing (much the same thing), rely on this same total knowledge and ability to anticipate and preempt. In this case, it’s knowledge of the consumer-and-environment rather than the target-and-environment: for deliveries to be made with the speed these projects are aiming at, consumers’ purchasing decisions need to be anticipated, so that commodities can be routed to local hubs and available for quick delivery.
So in this context, where Google is already developing its own drone program, Google owns one of the platforms with the best generalized knowledge of environments around the world (Google Maps), and Google is the master of using algorithms and metadata to measure and affect reality, it makes plenty of sense not only for the government to enlist Google in its drone program but also for Google to want to participate in it. The sense this makes is not moral sense; hence the employees’ protests. But it points to what Packer predicted as “allowing the commingled digital and military teleologies to be carried to their logical conclusion.”
I hope that makes sense! Like I said, my notes were pretty spotty, and this represents only part of what the talk covered. (There was this amazing video he showed of Nancy Pelosi supporting/endorsing some drone policy or other that I wish I could find.) Everything I quoted here is from sources Andrejevic mentioned in the talk, though I did track some of them down to get a more complete picture.
17 notes
·
View notes
Text
Tokyo Revengers Girls' playlist 💓
WARNING: THE MUSIC VIDEOS MIGHT HAVE FLASH LIGHTS EFFECTS.
TRIGGER WARNINGS: SEXUAL THEMES, [IN THE VIDEOS] MURDER, DOMESTIC VIOLENCE, BLOOD
Album: Tokyo Revengers Playlist
PRELUDE:
Let me tell you something: they might not be the best-written female characters, but I admire them, because they could simply solo the whole manga. The mangaka really gave them a good temper; me, I would have ended up in jail.
Yes, we all wish they were written differently, but they are perfect for me and for people out there.
Note: Sorry for the late post, yesterday I had a family emergency and couldn't post. I swear, if life doesn't give me a break, I'm gonna do something terrible... eat a whole Ben and Jerry's cup of chocolate ice cream.
💓1894, Amber Mark
'Bout to pull up in the club 'Bout to tear this whole place up Dance until we've had enough (yeah) Got my ladies walkin' in And I'm here with all of them Don't be tryin' to butt in (yeah)
💓¡aquí mando yo!, Kali Uchis
Yo tomo las decisiones, yo escojo las posiciones Puedes tener los cojones, pero yo los pantalones
💓Venom, Little Simz
They would never wanna admit I'm the best here From the mere fact that I've got ovaries It's a woman's world, so to speak Pussy, you sour Never givin' credit where it's due 'cause you don't like pussy in power Venom
💓Universe, Ambar Lucid
I don't really want to show you what I'm feeling My reality's been something that I'm dreaming I belong to the universe, I belong to the universe I don't belong to anyone else, no Mi magia te ha cambiado yo, lo siento Siempre cambio yo me muevo como el viento I belong to the universe, I belong to the universe I don't belong to anyone else, no
💓Lexii’s Outro, Kehlani
Matter fact, don't let 'em see you down 'Cause if they see you down, they gon' try to get up They gon' know that you stuck, exactly what they want So even if you fuckin' up, you gotta put on that front You gotta act like you're on top, even if your shit sunk I know this what you want
💓Barrio, YENDRY
Tú no eres el hombre que conocí Si quieres la guerra yo estoy aquí Yo soy mi propia dueña y confió en mi fuerza
💓GOOD FOR HER, Mothica & emlyn
Good for her, you're only gonna make it worse, 'cause If you stand in her way, she's not gonna behave Good for her, I'm ready for how she makes it hurt, 'cause It what's you deserve, she puts her first, now good for her
💓Prodigal Daughter, Lights
Now I'm making a living Crashing the algorithm Flip the pillow to the cool side Love, magic, feminism Finally found religion Laying naked on the poolside
💓YES MOM, Tessa Violet
2020 vision with ambition's how I'm made I can't keep from winning when it's in my DNA Push me down and I bounce right back Trampoline and it's in my past (past) Rising like a phoenix making fire from the ash, yeah
💓Artemis, AURORA
The mother made us a savage daughter Who never begs for forgiveness I always wondered why they all came back for more The gods have made us a virgin hunter Who in the storm becomes stillness I always wondered why they all came back for more Came back for more
OUTRO
I love my girls, but Senju and Yuzuha... they are the closest characters to me. I admire these girls who stand with their brothers, boyfriends, whoever they are... and they hold back their punches... because if they were my relatives, I would kick their asses for behaving like children.
#tokyo revengers playlist#tokyo manji gang#black dragons#brahman#yuzuha shiba#senju haruchiyo#hinata tachibana#emma sano#akane seishu#tenjiku gang
0 notes
Text
Lupine Publishers | The Need for Ethical Artificial Intelligence and the Question of Embedding Moral and Ethical Principles
Lupine Publishers | Advances In Robotics & Mechanical Engineering
Introduction
The issue of Facebook moderators made headlines in 2019. When confronted with horrifying images, such as terrorist assaults filmed online, moderators are required to be extremely reactive in order to remove scenes violating human dignity from the social network as rapidly as possible. In [1], my colleagues and I explored whether or not it was possible to model human values such as virtue. Modeling could eventually make it possible to automate all or part of the moderators' arduous and thankless work. In a first part, this article deals with the need for a reflection on ethical Artificial Intelligence (AI). After providing a definition of what AI is, I then discuss how ethical rules could be implemented in a system using AI. In a second part, I ask whether it is possible to embed moral and ethical principles. Using a utility function can help agents make ethical decisions, but it is also important to be cautious about the limitations of such a sometimes-simplistic approach. Ethics must be appreciated in the social and technical context in which it is first developed and then implemented [2].
The Need for Ethical Artificial Intelligence
Artificial intelligence
A definition. AI can be described by a universal triad: the data brought by the environment; the operations, defined as the logic that mimics human behavior; and finally a control phase aiming to retroact on its previous actions. Its definition is essentially based on two complementary views: one focused on behavior and how it acts, especially like a human, and the other emphasizing the reasoning processes and how it reproduces human skills [3]. However, both points of view insist on the rational behavior that an AI must have. Moreover, it is important to pay attention to which kind of AI we are dealing with: strong or weak AI [4]. Weak AI, also known as narrow AI, is shaped by behaviors answering to observable and specific tasks that may be represented by a decision tree. On the other side, strong AI, or artificial general intelligence, can copy human-like mental states. For this type of AI, this means that decision abilities and ethical behavior are issues that need to be taken care of. Finally, a strong AI could find the closest solution to the given objective and learn through external feedback. The latter is the type that poses unprecedented problems that researchers are just starting to study. In fact, a system embedding strong AI is able to learn without human assistance or the injection of additional data, since the AI algorithm generates its own knowledge. Therefore, the exterior observer or the user of such agents will no longer know what the AI knows, what it is capable of doing, nor the decisions it is going to take. Hence the need to establish an ethical framework that defines an area of action and prevents the system from taking decisions contrary to ethics.
How to implement ethical rules?
In order to implement ethical rules, there are two approaches. The first, the top-down approach, is based on ethical rule-abiding machines [5-6]. The strategy is to respect unconditionally the ethical principles related to morality, such as "Do not kill". However, without understanding the potential consequences of the empirical decisions taken, an AI system creates numerous approximations that are a significant drawback. This can lead to rule conflicts, even for the three laws of robotics [7]; it may also lead to unintended consequences due to added rules [8]. It should be noted that even inaction can be counted as injuring humans. Moreover, the complexity of the interactions between humans' priorities may lead to inappropriate interpersonal comparisons across various added laws [9]. The second method, called bottom-up, focuses on case studies in order to learn general concepts. The case studies make it possible for a strong AI to autonomously learn wrong and biased principles and even to generalize them by applying them in new situations it encounters. These types of approaches are considered dangerous. In fact, in this process of learning, basic ethical concepts are acquired through a comprehensive assessment of the environment and its compliance with previous knowledge, without any bottom-up procedure. The result of this learning will be taken into account for future decision making [10,11]. Eventually, when an AI algorithm faces a new situation it has not encountered before, extrapolation without a control phase may result in perilous situations for humans [12].
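The top-down idea of unconditional rule-abiding can be sketched as a veto filter applied to candidate actions before any other reasoning runs. The rule names and the action-effect format here are invented for illustration, not part of the cited works.

```python
# Minimal sketch of the top-down approach: hard-coded moral rules that
# veto candidate actions outright, regardless of any utility estimate.
FORBIDDEN = {"harm_human", "deceive_user"}

def permitted(action_effects):
    """An action is allowed only if none of its effects matches a forbidden rule."""
    return FORBIDDEN.isdisjoint(action_effects)

print(permitted({"move_forward"}))                 # True
print(permitted({"move_forward", "harm_human"}))   # False
```

The drawbacks described above show up immediately in such a filter: it says nothing about inaction, and two rules can jointly veto every available action.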
Artificial Intelligence Embedding Moral and Ethical Principles
A utilitarian approach to ethics consists in choosing, from a set of possibilities, the solution that leads to the action maximizing intrinsic good or net pleasure [13]. This involves quantifying Good or Evil in a given situation. However, certain situations supported by ethical reasons in an empirical study may prohibit the combined execution of certain actions. These complex cases are at the origin of dilemmas, and ethical principles do not make it possible to establish a preference between them. Therefore, autonomous agents need to be endowed with the ability to distinguish the most desirable option in light of the ethical principles involved. To achieve this goal, this article proposes in the following subsections a method called a utility function as a means of avoiding ethical dilemmas.
Using a utility function to help agents make ethical decisions
In order to achieve this goal, a number of solutions have been proposed [14]. One of them is the utility function, also known as the objective function. This function is used to assign values to outcomes or decisions; the optimal solution is the one that maximizes the utility function. This approach, based on quantitative ethics, determines which action maximizes benefit and minimizes harm. Its objective is to make it possible for an AI algorithm to take the right decisions, particularly when it encounters an ethical dilemma. From a mathematical point of view, the utility function takes a state or a situation as an input parameter and produces a number as output [15]. This number indicates how good the given state or situation is for the agent. The agent should then make the decision that leads to the state that maximizes the utility function. For instance, take the case of an autonomous vehicle, and assume the car is in a situation where harm is unavoidable: it would inevitably either hit two men on the road or crash into a wall, killing the passenger it is carrying [16]. Based on our previous definition of utilitarian ethics, the decision that minimizes harm is the one that kills as few people as possible. Therefore, the car should crash and kill the passenger to save the two pedestrians, because the utility function of this outcome is the highest. The same reasoning applies to military drones when they have to choose between multiple outcomes involving moral and ethical principles.
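The dilemma above can be written as a minimal sketch, with an invented and deliberately crude scoring of minus one per death standing in for a real utility model:

```python
# Hedged sketch of the utility-function idea: each outcome of the car's
# dilemma is scored, and the agent picks the outcome with highest utility.
outcomes = {
    "swerve_into_wall": {"deaths": 1},   # kills the passenger
    "continue_straight": {"deaths": 2},  # kills two pedestrians
}

def utility(state):
    return -state["deaths"]  # fewer deaths -> higher utility

decision = max(outcomes, key=lambda name: utility(outcomes[name]))
print(decision)  # -> swerve_into_wall
```

The sketch also shows why the approach is "sometimes simplistic": everything the agent will ever weigh must first be flattened into that one number.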
Autonomous cars embedding AI algorithms that use a utility function are not yet marketed. Some models available to the general public have an autopilot mode that still requires a human being behind the steering wheel who will make a decision in case of a problem. Fully autonomous cars still operate only in test environments [17]. In the near future, the buyers of this type of car will primarily be public institutions such as municipalities. For instance, the city of Helsinki is testing an autonomous bus line, RoboBusLine, which carries passengers on a defined road at limited speed, and an autonomous shuttle is also in service in Las Vegas [18]. However, these are still prototypes in a test phase with an operator on board. The other customers that may be interested in autonomous vehicles are delivery companies, given the advantage of automating tasks, which results in cost reduction and efficiency. In fact, Amazon, FedEx and UPS are investigating solutions for driverless trucks. The utility function is currently under investigation as an active solution for avoiding ethical dilemmas without modifying the policy in use [19]. Autonomous robots are expanding, and the aim is not only to deal with ethical dilemmas but also to reduce uncertainty by quantifying problems such as exploration or unknown mapping; both can be defined stochastically (Shannon or Rényi entropy) [20,21]. Describing and acting in an incompletely defined world can be done with the help of estimators, but utility functions describe the perceptual state in line with the rules, so an active strategy can be implemented. This is already done for robot vision, for example [22].
Limits and dangers of the utilitarian approach
The approach described above, consisting in quantifying situations and assessing them with a utility function through a model, has its own limits where strong AI is concerned. For weak AI, engineers can implement decision trees at the design stage to establish rules, and they can anticipate the behavior of the AI more easily. On the other hand, as mentioned in section 2.1, advanced AI systems learn directly from the environment and adapt accordingly. By doing so, an external observer cannot always predict or anticipate the actions of such systems [23]. This is true of the AlphaGo algorithm, which takes decisions and implements strategies that even experts in the game cannot understand, although they lead to an optimal solution. The intelligent agent behaves like a black box whose internal functioning is unknown. This is particularly dangerous when it comes to autonomous vehicles or UAVs that put human life at stake. Using only a utility function to decide whether or not a UAV could be used in an armed conflict could be considered a war crime. Indeed, it is essential to test AI algorithms in different environments [23] and to cover as many situations as possible before they are registered for use. This involves confronting algorithms with different situations and ensuring they behave properly by taking the most ethical decisions possible. It will then be possible to identify anomalies and correct them immediately.
Conclusion
In just a few years, artificial intelligence has become a strategic issue in Europe, the USA and China. For fundamental considerations of the balance of power between the GAFAM on the one hand and ethics at the service of the greatest number of people on the other, it will be crucial to develop an "ethics in context". This is more than the computational ethics and utility functions developed by economists. It will be possible to implement an embedded code of ethics using artificial ethical agents, but only if ethical principles remain subject to a democratic deliberation involving all participants. My future research will focus on the question of "human values and AI" at different levels: universality, continentality (USA vs. China, for example), nation-state, and local communities using AI.
Photo

Why Choose Random Forest and Not Decision Trees
Author(s): Daksh Trehan
A concise guide to Decision Trees and Random Forest. Decision trees belong to the family of supervised classification algorithms. They perform quite well on classification problems, the decision path is relatively easy to interpret, and the algorithm is fast and simple. The ensemble version of the Decision Tree is the Random Forest.
Table of Contents
1. Decision Trees
- Introduction to Decision Trees
- How does the Decision Tree work?
- Decision Tree implementation from scratch
- Pros & cons of Decision Trees
2. Random Forest
- Introduction to Random Forest
- How does Random Forest work?
- Scikit-learn implementation for Random Forest
- Pros & cons of Random Forest
https://bit.ly/3txok1D
#MachineLearning #ML #ArtificialIntelligence #AI #DataScience #DeepLearning #Technology #Programming #News #Research #MLOps #EnterpriseAI #TowardsAI #Coding #Dev #SoftwareEngineering #machinelearning
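The "implementation from scratch" angle of the article can be hinted at with a minimal pure-Python sketch (no scikit-learn required; the function names and the toy dataset are invented for illustration): a one-split decision stump stands in for the tree, and a bagged ensemble of stumps voting by majority gives the random-forest flavor.

```python
import random
from collections import Counter

def stump_fit(X, y):
    """Find the (feature, threshold) split minimizing misclassifications."""
    best = None
    for f in range(len(X[0])):
        for t in sorted({row[f] for row in X}):
            left = [yi for row, yi in zip(X, y) if row[f] <= t]
            right = [yi for row, yi in zip(X, y) if row[f] > t]
            if not left or not right:
                continue
            l_lbl = Counter(left).most_common(1)[0][0]
            r_lbl = Counter(right).most_common(1)[0][0]
            err = sum(v != l_lbl for v in left) + sum(v != r_lbl for v in right)
            if best is None or err < best[0]:
                best = (err, f, t, l_lbl, r_lbl)
    if best is None:  # degenerate sample (all rows identical): predict majority
        maj = Counter(y).most_common(1)[0][0]
        return (0, float("inf"), maj, maj)
    return best[1:]

def stump_predict(model, row):
    f, t, l_lbl, r_lbl = model
    return l_lbl if row[f] <= t else r_lbl

def forest_fit(X, y, n_trees=25, seed=0):
    """Random-forest flavor: each stump is fit on a bootstrap resample."""
    rng = random.Random(seed)
    models = []
    for _ in range(n_trees):
        idx = [rng.randrange(len(X)) for _ in X]
        models.append(stump_fit([X[i] for i in idx], [y[i] for i in idx]))
    return models

def forest_predict(models, row):
    # Majority vote over the ensemble.
    return Counter(stump_predict(m, row) for m in models).most_common(1)[0][0]

# Toy data: class 1 iff the first feature exceeds ~0.5.
X = [[0.1, 3], [0.2, 1], [0.4, 2], [0.6, 0], [0.8, 5], [0.9, 4]]
y = [0, 0, 0, 1, 1, 1]
forest = forest_fit(X, y)
print(forest_predict(forest, [0.95, 0]))  # expect class 1
```

Bootstrap resampling is what distinguishes the ensemble from a single tree: each stump sees a slightly different dataset, so their individual errors tend to cancel in the vote, which is the intuition the article develops.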
Text
I implemented a Multi-Authority Attribute-Based Chameleon Hash function (MAP-ABCH). I need to analyze the security and the math behind this algorithm, based on the Bilinear Diffie-Hellman or elliptic-curve assumptions. Please read the attached reports to understand the algorithm and to prove that the function is indistinguishable, private collision resistant, and public collision resistant based on the Decisional Diffie…
Text
Electronica Industrial Moderna Timothy Maloney Pdf
Electronica Industrial Moderna (Modern Industrial Electronics) by Timothy J. Maloney, 5th edition, Editorial Pearson, approximately 1000 pages, 52 MB; first published in 1996 and listed at Book Depository and as a Scribd ebook (Timothy-maloney-electronica-industrial-moderna.pdf). The book covers industrial automation and control devices, AC motors (chapter 15), digital-control industrial automation, and topics such as the transistor switch as a decision-making device. If you are not yet a member, you can use the links below to create a free account; to download, click on the cover.
Author: Mishakar Mezik
Country: Thailand
Language: English (Spanish)
Genre: Career
Published (Last): 28 July 2008
Pages: 240
PDF File Size: 14.68 MB
ePub File Size: 20.53 MB
ISBN: 416-7-31929-758-5
Downloads: 63355
Price: Free (free registration required)
Uploader: Tashicage

Text
Is the internet racist?
Before answering this question, it must be made clear that technology in itself is only a tool, carrying no stigma that would provoke racist social behavior. However, "the algorithms in charge of showing us personalized content are guided by humans and therefore display the discrimination that exists in society," explains Angela Casal, who adds that, whether consciously or unconsciously, technology will always include the biases that, even when we do not notice them, make us human.

Image: Shutterstock
Generating arguments and conversation around this topic is extremely important, because algorithms are now being granted the power to make decisions in processes that involve individuals. Multiple studies have demonstrated the segregation produced by algorithms in employment and criminal-justice contexts, pushing aside many people on account of characteristics such as their gender or skin color.
"While we often think of terms such as 'big data' and 'algorithms' as being benign, neutral, or objective, they are anything but. The people who make these decisions hold all types of values, many of which openly promote racism, sexism, and false notions of meritocracy, which is well documented in studies of Silicon Valley and other tech corridors." (Safiya, 2017)

Image retrieved from: Mujeres 360
The magnitude of this problem goes unnoticed by many people, when it should not. The truth is that algorithms of this kind are present in many, if not all, aspects of our lives, constantly making decisions that in one way or another affect our place in society. It is therefore crucial that a sense of collective concern develop around this issue, because instead of moving toward an equal and just society, we remain stuck in one that continues to privilege only a few.
Text
How Might Artificial Intelligence Applications Impact Risk Management?

John Banja
AMA J Ethics. 2020;22(11):E945-951.
doi: 10.1001/amajethics.2020.945.
Abstract
Artificial intelligence (AI) applications have attracted considerable ethical attention for good reasons. Although AI models might advance human welfare in unprecedented ways, progress will not occur without substantial risks. This article considers 3 such risks: system malfunctions, privacy protections, and consent to data repurposing. To meet these challenges, traditional risk managers will likely need to collaborate intensively with computer scientists, bioinformaticists, information technologists, and data privacy and security experts. This essay will speculate on the degree to which these AI risks might be embraced or dismissed by risk management. In any event, it seems that integration of AI models into health care operations will almost certainly introduce, if not new forms of risk, then a dramatically heightened magnitude of risk that will have to be managed.
AI Risks in Health Care
Artificial intelligence (AI) applications in health care have attracted enormous attention as well as immense public and private sector investment in the last few years.1 The anticipation is that AI technologies will dramatically alter—perhaps overhaul—health care practices and delivery. At the very least, hospitals and clinics will likely begin importing numerous AI models, especially “deep learning” varieties that draw on aggregate data, over the next decade.
A great deal of the ethics literature on AI has recently focused on the accuracy and fairness of algorithms, worries over privacy and confidentiality, “black box” decisional unexplainability, concerns over “big data” on which deep learning AI models depend, AI literacy, and the like. Although some of these risks, such as security breaches of medical records, have been around for some time, their materialization in AI applications will likely present large-scale privacy and confidentiality risks. AI models have already posed enormous challenges to hospitals and facilities by way of cyberattacks on protected health information, and they will introduce new ethical obligations for providers who might wish to share patient data or sell it to others. Because AI models are themselves dependent on hardware, software, algorithmic development and accuracy, implementation, data sharing and storage, continuous upgrading, and the like, risk management will find itself confronted with a new panoply of liability risks. On the one hand, risk management can choose to address these new risks by developing mitigation strategies. On the other hand, because these AI risks present a novel landscape of risk that might be quite unfamiliar, risk management might choose to leave certain of those challenges to others. This essay will discuss this “approach-avoidance” possibility in connection with 3 categories of risk—system malfunctions, privacy breaches, and consent to data repurposing—and conclude with some speculations on how those decisions might play out.
Text
The development of Fiction RPG with AI.
The creation of fiction RPG (Role-Playing Game) with AI (Artificial Intelligence) is one of the most exciting application areas of AI in the new industrial revolution 2.0. Creating compelling fictional worlds, characters, and plots for role-playing games is a complex process that requires a lot of creativity and writing skills. With the help of AI, game developers can automate some of these processes to create more immersive and exciting gaming experiences.
One of the ways that AI can help in the creation of RPG fiction is by automatically generating content. For example, developers can train a neural network to generate dialogue and character descriptions based on certain parameters, such as the character's personality, the tone of the conversation, and the context in which it is found. This can help developers create more detailed and realistic characters and improve the overall game experience.
Another way that AI can help in the creation of RPG fiction is in decision making. Role-playing games often involve complex decisions and plot branches based on the player's choices. Machine learning algorithms can help developers design a decision system that intelligently and realistically responds to player choices. This can make the gaming experience more immersive and exciting as the player will feel like their choices have a real impact on the story.
In short, the creation of AI RPG fiction is one of the most exciting areas of the new industrial revolution 2.0. AI can help game developers create more detailed fictional worlds, more realistic characters, and more exciting storylines. It can also help players feel like their choices have a real impact on the game's story, which can significantly improve the overall game experience.
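The branching-plot mechanism described above can be made concrete with a tiny sketch (the story nodes and choice names are invented for illustration): the plot is a directed graph, and a sequence of player decisions selects a path through it. An AI-assisted design tool would generate graphs of this shape; the traversal itself stays simple.

```python
# Minimal branching-story graph: node -> text + named choices leading onward.
story = {
    "start":  {"text": "A stranger blocks the road.",
               "choices": {"talk": "parley", "draw_sword": "fight"}},
    "parley": {"text": "The stranger offers a map.",
               "choices": {"accept": "ally", "refuse": "alone"}},
    "fight":  {"text": "Steel rings out.", "choices": {}},
    "ally":   {"text": "You travel together.", "choices": {}},
    "alone":  {"text": "You walk on, mapless.", "choices": {}},
}

def play(story, decisions):
    """Follow a list of player decisions through the graph; return the path."""
    node, path = "start", ["start"]
    for d in decisions:
        node = story[node]["choices"][d]
        path.append(node)
    return path

print(play(story, ["talk", "accept"]))  # → ['start', 'parley', 'ally']
```

In a generative pipeline, a model would fill in the `"text"` fields and propose new nodes, while the graph structure is what lets the player's choices have the persistent, branching consequences the post describes.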
Text
Analyzing algorithmic art: Roman Verostko
To begin, we need to establish what an algorithm is: a step-by-step procedure for doing something, for solving a problem. Followed faithfully, the steps should always lead to the same result, whether the procedure is carried out by a human or by a computer. Note that algorithms are not restricted to mathematical procedures; a piece of music or a cooking recipe is also made up of a series of steps, and if they are not followed as indicated, the result can be something quite different from what was intended. This holds for all algorithms. With the definition clarified, let us continue.
As technology advanced, it became ever more likely that at some point an artist would pick up a computer and experiment with it; that was the beginning of the Algorists. Algorithmic art is defined, quite simply, as computer-generated art whose design is produced by an algorithm. Its co-founders were Roman Verostko and Jean-Pierre Hébert, and it is Hébert who is credited with the term and its definition.
Within algorithmic art one can identify two sub-genres, so to speak: cellular automata and fractal art. The first consists of laying apparently random, repeated artistic patterns over a photograph; the second consists of "varieties of computer-generated fractals with coloring chosen to give an attractive effect."
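Both sub-genres can be illustrated at once with a few lines of code (a generic sketch, not any particular artist's method): an elementary cellular automaton, Rule 90, in which each new cell is the XOR of its two neighbors, grows from a single seed cell into a pattern that traces the fractal Sierpinski triangle.

```python
def rule90_row(cells):
    # Rule 90: each new cell is the XOR of its left and right neighbors.
    n = len(cells)
    return [cells[(i - 1) % n] ^ cells[(i + 1) % n] for i in range(n)]

def run(width=33, steps=16):
    row = [0] * width
    row[width // 2] = 1            # single seed cell in the middle
    lines = []
    for _ in range(steps):
        lines.append("".join("#" if c else "." for c in row))
        row = rule90_row(row)
    return "\n".join(lines)

print(run())  # prints a Sierpinski-triangle pattern of '#' characters
```

Swapping the update rule, the seed, or the glyphs changes the visual character entirely, which is the sense in which a tiny algorithm can stand in for an artist's "form generator."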
Roman Verostko says: "My algorithmic work is rooted in the tradition of early 20th-century artists who sought to create an art of pure form. Influenced by the work and writings of pioneers such as Malevich and Mondrian, my work became a lifelong quest for an art of pure form. This quest, to create visual form with a life of its own, has dominated my work since the 1950s."
Gaia, h17,v5, Celebrating Earth, Algorithmic pen & ink drawing. 1995. 56.5 cm X 76.2 cm.
On the technical procedure, he writes on his website: "By 1982 I had developed elementary computer code for initiating and improvising art ideas. These instructions included 'form-generating' routines that continued my search for pure form. With these generators I could explore visual possibilities, make decisions, refine forms, and compose a procedure for creating art. The finished work is drawn with ink pens mounted on the drawing arm of a pen plotter. The control algorithms for all procedures are under continuous development in a program of routines I have titled 'Hodos', the Greek term for 'way'."
Cyberflower VII, 29" by 23", 2000
"Most of my algorithmic works are original pen-and-ink drawings on rag paper, executed with technical pens whose refillable reservoirs hold permanent acrylic inks. Since the mid-1980s, all drawings have been executed with Houston Instruments multi-pen plotters coupled to PCs. Some works include occasional brush strokes. For this purpose I devised an interactive routine with the plotter: it stops the machine, as needed, so I can load a brush and mount it on the plotter's drawing arm to execute brush strokes; the plotter then uses the brush instead of ink pens. In some cases I have also used self-inking brushes instead of pens."
Manchester Illuminated Universal Turing Machine, #1, 1998, 30" by 22", plotted pen drawing with gold leaf. Collection of the Victoria & Albert Museum, London
"With several pen plotters active, I came to see my studio as a twentieth-century electronic scriptorium and my drawing machines as my scriptors."
"Brush strokes mirroring each other identify the initiating coordinates. Thousands of lines, derived from this initial form, are clustered and mirrored across the eleven panels. In effect, the mural displays the growth of form and, by analogy with biological process, can be seen as epigenesis. The smaller calligraphic strokes were generated in code with procedures based on the same drivers used for the brush strokes. I see them as statements of pure visual form about the slope and curve information that pervades the whole project. A 'self-similarity' pervades the form variations since all share the same parent code."
The forms drawn in many of his works, if not all of them, are surrounded by important philosophical or scientific themes, and others by texts from Lao Tzu, Confucius, or Galileo Galilei. Unfortunately the texts cannot be read, since an algorithm has converted them into a secret language of new signs. It would not hurt to provide a translation, but I also think the work would lose part of its peculiar character.
Pearl Park Scripture – O, Lao Tzu, 2004
To me it is fascinating how the use of algorithms can create hundreds of different illustrations, and how the choice of materials can give the resulting images even more meaning. Roman Verostko knew how to exploit his idea perfectly, adapting it to his way of thinking. I also love that he created an algorithm, a new language; it is like magic that "the words" come to life and reveal themselves, yet still look so mysterious. I believe that is precisely the magic of his work, and of the algorithmic process in art itself.
Sources / Bibliography
http://www.verostko.com/algorithm.html
https://www.hisour.com/es/algorithmic-art-12807/
https://elartedigital.wordpress.com/artistas/roman-verostko/
http://www.verostko.com/menu.html
Photo

Sneak peek of "Twelve Nodes," currently on view at Valletta Contemporary, Malta, as part of the Non-Aligned Networks show: 12 marble blocks, 12 e-ink screens, patch panels and patch cords, and the Ethereum blockchain network.
This new piece came out of my two-year-long and ongoing collaboration with Fair Data Society, a foundation and community of blockchain developers and activists developing the Fair Data Certificate and a decentralised fair-data exchange platform and guidelines focused on humans, ethics, privacy and fair data economies.
The work refers to the Roman Twelve Tables, the very first written legislation, which stood at the foundation of Roman law and was displayed as twelve marble plates in the Roman Forum. The written language used in traditional law and legal instruments is also used in coding the algorithmic law which is taking over traditional governing models in the age of surveillance capitalism, data slavery and algorithmic decisionism.
The e-ink screens embedded in the marble blocks display the current status of the collectively written Fair Data Principles, to which anyone on the internet can contribute edits, and on which the platform is built. Connected via ethernet cables, all the elements of the installation form a decentralised network of their own. More soon.
#fds #ethereum #eth #egorkraftwork #valettacontemporary #twelvenodes #twelvetables #fairdata #kraftblog https://www.instagram.com/p/ByOMwKmHCY8/?igshid=1gdb9biuo7pfm