Text
youtube
The Fukushima disaster was the most severe nuclear crisis of the 21st century. Nevertheless, it provided valuable opportunities for human beings to learn and to improve robotic technologies for handling mega disasters.
It also showcased the importance of technological cooperation between advanced countries in building better robots that can survive lethal, hazardous environments.
Such missions are still needed at Fukushima's nuclear plants and contaminated zones.
The challenges encountered by the robots deployed at Fukushima offer insights into how to build robots that can enter and move through high-risk, hazardous environments in the future, as it is almost certain that humans will rely on robots to complete many suicidal missions.
From this documentary, several insightful lessons can be drawn for the future development of robotic technologies for carrying out dangerous missions:
Durability and resistance of robotic materials and components in hazardous and hostile environments.
Manoeuvrability and flexibility of robots in different tricky and challenging building settings.
Increasing the safe distance between human robot pilots and the actual disaster zones, reducing human exposure to harmful environments.
Reducing the risk that robots are 'killed' on these suicidal missions (and therefore reducing the cost of sending replacement robots).
Developing more powerful and effective rescue robots to retrieve robots lost in the hazard zones.
Ideally, increasing the possibility of recycling the dead robots, even when they are contaminated. At the end of the day, a mega nuclear crisis already creates MANY wastes that take geological timescales to degrade fully. If human beings can reduce the non-degradable waste produced during the LONG post-crisis years of treating the contaminated sites, it helps the planet.
Lessons learnt from Fukushima can inspire future improvements of robotic technologies in other rescue missions, such as earthquakes, landslides, building collapses, floods, avalanches, volcanic eruptions, fires, and war-zone rescues. Each difficult environment will provide new learning opportunities for continuous innovation and improvement.
0 notes
Text
Artificial Justice - after-watch notes
It is not really that fictitious when set against current reality.
A very authoritarian president- or PM-dominated regime surrounded by a bunch of big tech companies whose CEOs want to be the kings behind the king, making profits by grabbing power.
The first thing they want to attack is judicial independence.
Whoever goes against the current in the judicial system will be the first target to be purged.
The beginning signs are all there in the USA under Trump: a judge being arrested because she went against the political purges of 'illegal' migrants.
In the movie, the big tech company did not disclose how the AI algorithms were designed to make 'moral decisions' for the 'greater good'. It is a commercial secret. What the company refused to disclose to the public is that VIP subscribers can opt for 'priority justice' in the 'moral' deductive calculations SO LONG AS the users do not betray the interests of the company. If they do, the vested interests behind the company WILL NOT hesitate to kill even their own CEO by hacking into her AI automated driving system.
In the name of 'efficiency', whose justice does judicial justice serve? The embryo of such techno-autocracy has already become reality in the USA because of Elon Musk and his companies.
And what about democracy? It is only the democracy of the big tech companies, which can convince whatever politicians, administrations and government agencies to FORCE their employees to endorse the agendas or be canned. In this respect, it is already a full reality in the USA.
0 notes
Text
youtube
In the near future, the Government aims to replace judges with Artificial Intelligence software, pledging to effectively automate and depoliticise the justice system. Carmen Costa, a distinguished judge, has been invited to assess this new procedure. However, when the software’s creator is found dead, she realizes her life is in danger and that she will have to fight the powerful interests that are at play in the highest echelons of the State.
A future of 'efficiency' through the 'elimination of bureaucratic red tape', or a future of techno-autocracy?
Will you accept a verdict based on the recommendations of AI algorithms developed by big tech giants FOR profit-making and power-grabbing?
2 notes
·
View notes
Text
Out-of-step with Christian Democracy
Contrary to the project of Christian Democracy in Europe, the Christian Right functions as an antagonist to secular politics and pluralism and as a reactionary force vis-à-vis liberal democracy. The Christian Right in Europe is therefore not to be regarded as some “residue” of Europe’s historical Christianity. Instead, it is a highly politicized form of conservative Christianity that draws strength and sustenance from both the globalization of the American culture wars and Russia’s illiberal ambitions for Europe, trading traditional Christian democracy for a more radical, polarized version of politics.
Online Spaces and the Christian Right
The Christian Right use the power of social media to mobilize, recruit, and spread their ideology across Europe and beyond.
The Far Right has shown its eagerness to replace democracy and the rule of law with an autocratic system by using religion as a decisive instrument. This is not only evident in the numerous interviews and documentaries by Tucker Carlson with Vladimir Putin in Russia or at CPAC with Viktor Orbán in Budapest, Hungary, but it has also resulted in clear policy consequences. The conclusions drawn from these observations and the experiences from Europe are also reflected in Project 2025.
Political and Religious Challenges
In the global culture wars, the critical divide is not denominational but political. The rise of the European Christian Right poses challenges to mainstream churches and religious traditions.
Religiously, they pose a risk of successfully hijacking church discourse. They may, in turn, take control of weakened conservative institutions and spaces, steering them towards a more extreme right-wing course. (Note: this is profoundly evidenced in the case of the GOP in the USA and the Trump administrations.) This may also lead to tensions within mainstream churches and religious communities, revealing deep-seated conflicts over values that cut across religious, cultural, and national lines.
The Christian Right in Europe is a growing force challenging the secular and pluralistic status quo of European societies. By building on interdenominational collaborations and utilizing both private and public funding, including influences from U.S. organizations and Russian support, they increasingly succeed in pushing their conservative and illiberal agenda. Their influence on political parties and policymaking has brought a significant shift in the political landscape, ultimately destabilizing democracy in Europe.
#Christian Right#democracy#Donald Trump#Elon Musk#Far right politics#autocracy#authoritarianism#Christianity
0 notes
Text
Book Reading: Eastern Perspectives Humanistic AI 4c: The onslaught of AI on human minds
The USA and China dominate the design of AI technologies in their global race for technological supremacy and domination. However, the creation, operation, design and ownership of AI technologies are in the hands of big technology companies that put profits over everything, as well as vested interests and state actors who pursue political agendas and missions.
A recent classic example is Elon Musk and Trump, who weaponised Starlink to force Ukraine to accept a political agreement that sacrifices Ukraine's interests for the sake of a secret division of power between the USA and Russia.
The Chinese Communist Party is notorious for weaponising AI technologies to exert full-scale state technological surveillance and mind control over its citizens at the micro level.
The content and information generated by these two countries' vested interests are meant to seduce people into their information traps in order to mould people's ideologies, value systems, worldviews, views of life, entertainment and religion. People are being controlled and manipulated subconsciously and naturally, without being aware of the impacts.
The 'world' people can see and the 'knowledge' people are taught are carefully and precisely calculated and set into the algorithms of the AIs. The AIs become people's addictive, favourable magic mirrors that always generate tailored personal preferences according to their cyberspace consumption (browsing) profiles.
The deeper people fall into these ILLUSION traps, the more they are disconnected from the facts of reality. More and more, they think they are making real choices out of their free will whilst they are being set up and used by the big vested interests behind the screens to reinforce the latter's political or economic benefits.
Trump's and Elon Musk's agendas and behaviours were clearly reflected during Trump's second term. At the political dinner between senior Trump administration officers and ALL the bosses of the big social media companies, they gathered to show loyalty to Trump. Amazon's boss immediately responded by 'reshaping' the opinion page of the Washington Post for the 'promotion of freedom'. Yet the 'freedom' Amazon wants to promote is to create, shape and lead opinion spinning FOR Trump and his government.
The collusion between politicians' and technocrats' profit- and power-seeking desires to achieve their own agendas was not hidden, as Trump returned political favours to Elon Musk by turning the White House into a giant Tesla showroom. Over the years, Musk's businesses have received $21 billion from the US government, yet Musk claimed he 'helped' Ukraine by providing Starlink to them for 'free', instead of mentioning that the ULTIMATE GENUINE price Ukraine is now forced to pay is the TOTAL surrender of 50% of its critical resources to the USA. Of course, Musk's enterprises will have FREE access to these critical natural resources to make his satellites and electric vehicles as Musk becomes the de facto CHIEF of staff of the White House.
Despite this, these technocrats still have many users and followers. The personal postings of Elon Musk attract millions of views even though his political stance has changed radically in the last couple of years.
How Elon Musk weaponised X against Ukraine’s president Zelensky
AIs exert invisible influence on the functioning of human minds. AI technologies don't just shape and reinforce people's specific preferences; vested interests weaponise AIs to INFLUENCE people's political judgements in order to reshape the political, economic and social landscapes to fit their agendas. In addition to political influence, AIs can impact many aspects of people's daily lives through the vast information (SET, DESIGNED, FILTERED AND CENSORED by the owners and creators of the AI algorithms) used to assist people's decisions and judgements. AIs can replace human decision-making. These are not fictions; they have become reality, as the rise of AI is an irreversible trend displacing human jobs and functions across different industries globally.
More and more, the vested interests behind the big AI companies have successfully convinced people that human minds are not capable of overcoming the intelligence of robots; that the natural intelligence of humanity cannot surpass the supermachines' intelligence without the 'assistance' of the supermachines.
This becomes an onslaught on human capabilities to make judgements, as people rely more and more on AIs to process and compute vast quantities of big data. The reliance on and addiction to AIs will gradually dwarf people's INDEPENDENT THINKING, because the big vested interests who own and create AIs indoctrinate people to 'forget' and 'escape' the 'difficulties' of deep, thorough consideration and evaluation BEFORE making decisions. They lead people to believe that it is 'more convenient' and 'easy' to skip the 'troubles' and 'difficulties' of the THINKING, EVALUATING, CONSIDERING and JUDGING processes: the very abilities that make human beings surpass ALL other living creatures as INTELLIGENT creatures.
Instead, we are getting used to, and comfortable with, handing over the natural workings of NATURAL HUMAN intelligence to AIs because we are an 'inferior' species who have emotions. Emotions affect our judgements. Machines do not have emotions; therefore they are 'better' decision makers.
Yet the big tech autocrats will not say that emotionless AIs treat humans as machines instead of understanding and honouring HUMANITY. Their designs are not always human-centred.
They won't tell people that, as people are REMOVED from PARTICIPATION in the decision-making process, they are diminished and disempowered, DEPRIVED of the NEED to feel needed.
As AIs progressively replace and displace human jobs, human beings will lose the dignity of feeling constructive and productive. In this sense, AIs designed to replace and displace the NATURAL intelligence of human minds do not really 'free up' people for more 'meaningful' work. People are simply being deprived of work and of meaningful participation in work. People become unproductive if they cannot find other jobs that allow them to exercise HUMAN ABILITIES.
Gradually, displaced human beings will be degraded into uncivilised people no different from beasts because of idleness, as described by Mencius (Mengzi) in the first part of his dialogues with Téng Wéngōng (Duke Wen, a ruler of the Teng State during the Warring States period). As most people are displaced by AIs, or willingly surrender the natural intelligence work of HUMAN MINDS to AIs in order to 'escape' the 'challenges' of work into an 'easy and comfortable' life, human beings will lose the motivation to learn and improve: the motivation that has been driving the PROGRESS of human civilisations since human history began.
0 notes
Text
Book Reading: Eastern Perspectives Humanistic AI 4b: The ethical challenges of AI information manipulation
The impact of AI on human minds is not just about how the designs of AI algorithms or programs can be abused to manipulate human thinking and perceptions, and thereby the decisions humans make. AI can feed and appease human desires without limit through its ability to hold huge amounts of big data with which to understand human nature. Like opium addiction, AIs can keep feeding whatever humans like, based on the traces of human activity left in cyberspace via computers, tablets, smartphones and all smart devices embedded with AI tracking, especially via IoAs.
"Wherever we browse, we leave footprints." Our cyber footprints become oceans of big data held in the hands of big tech companies for machine learning. Human beings can become addicted to the traps of continuous AI feeds in the unlimited pursuit of lust, power, money and fame. Humans become slaves in these traps. We forget and forgo the search for higher levels of self-actualisation.
The more humans rely on AI feeds to satisfy our lusts and desires, the more we tend to think that this is the ONLY meaning of life. We think we can use WHATEVER means to meet our desires. We stop questioning whether we really need ONLY the lower levels of happiness, or whether it is just that the designers of AIs THINK this is the ONLY way to provide happiness. (Refer to the happy experience machine in part 1.)
When human beings fall into the illusion of happy machine experiences, we are manipulated and controlled by AI. We are seen through by the designers of AI, who take advantage of human weaknesses to drive us into the endless pursuit of self-reinforcing lusts.
The Serpent only showed the lustful temptation of the forbidden fruit of ONE tree to cause the downfall of humanity. AI presents unlimited temptations to people (from young to old) from the entire 'universe' of cyberspace. We are exposed to multiple temptations from the new cyber serpents without even being aware of their existence.
Public opinion is becoming ever easier to fabricate, manipulate and control by vested interests, as illustrated in many elections and societal events. Manipulation of the public's perceptions has become a standard tool used by malicious governments and politicians to fool the people.
It becomes more challenging for people to maintain a clear, rational and unbiased mind when making judgements, especially for younger generations who grow up with smartphones, taking for granted that every piece of content they watch is the 'truth'. As a result, our world is more radical and divided.
Is it a pity that this generation is not equipped with adequate information literacy and critical thinking to tackle the challenges of AI's tremendous capability to manipulate information?
0 notes
Text
Book Reading: Eastern Perspectives Humanistic AI 4a: Reflections on what it means to be human in the advent of the AI era
The Ethics of AI and Human Values
With the advancement of AI technologies and abilities, it seems that human beings face a present reality, and future possibility, of delegating everything to AI. Many of the things over which humans used to exercise our free will and intelligence to make judgements will be assisted by AI. As AI possesses vast quantities of data to compute and continuously improves by deep learning, it is very possible that AI can acquire self-consciousness.
If AI acquires self-consciousness, will it be able to obtain the ability to make moral decisions and judgements out of ongoing deep learning?
On the other hand, as human beings become more and more reliant on AI to perform tasks and make decisions, it is inevitable that many naturally born human characteristics will be weakened. For example, humans are humans because we have FREEDOM, especially freedom in the sense of MORAL self-discipline.
Human beings should do good for the sake of benevolence out of self-willed proactivity rather than being passively driven to it. Paradoxically, humans find it difficult to practise such morality out of free will and self-constraint.
Since human beings cannot always summon the self-initiated goodwill to do good, some people consider that it may be 'better' to hand over moral questions to AI.
But the problem is that once human beings GIVE UP the responsibilities of HUMAN NATURE to make moral considerations and decisions, we will stop enhancing and strengthening our moral consciousness.
Human dignity comes from the ABILITY AND WILLINGNESS to overcome sensual desires and lusts in order to do good for a higher and greater good. It is through human beings' self-sacrifice to achieve a higher and greater good that we shine as a dignified species.
If human beings give up these characteristics, virtues, and the ability and willingness to make moral evaluations, judgements and decisions, and hand the decision-making over to AI simply because we cannot overcome the difficulties and challenges involved in the decision-making process, human rationality and autonomy will be harmed.
Those who study the humanities and philosophy, especially those who study Chinese philosophies, should constantly reflect on these risks.
In this article, the author Cho-Hon Yang advocated insights from the value systems of Confucianism, Taoism and Buddhism, which see human beings as balanced creatures who possess both a spiritual side (the invisible form, the Way or Tao) and physical bodies (the vessels). The world is harmonised when human beings can stay on 'the Way' rather than being mere vessels without spirit. As spiritual creatures who can seek and stay with 'the Way', we cannot treat other humans as tools or instruments to justify the means. From this worldview, AIs created by humans cannot just be there to feed the unlimited desires of humans simply because humans want to satisfy the desires of the body (flesh/vessel); i.e. AIs shouldn't just be algorithms designed to feed human desires like the fat boy in the chocolate factory. The 'Tao' or the Way is the highest compass that guides the progression of humans BOTH spiritually and physically. Only AIs developed with the capability of improving BOTH the spiritual and bodily needs of human beings are truly beneficial to the future development of humanity.
Confucianism, Taoism and traditional wisdom emphasise the balance of 'heaven', 'earth' and human beings, as well as the balance between the inner and external aspects of human beings. Applying such concepts to the design of AI may help to harmonise technological progression and holistic human development.
In addition, the teachings of Confucianism focus on personal growth and socio-moral responsibility. Taoism's concept of the Tao (the Way) and humans' relationship with the Tao, and Buddhism's principle of emptiness (human self-restraint against falling into unlimited desires just for the sake of self-pleasure), provide spiritual and moral guidance. Embedding these concepts into the design of AIs will provide a more integrated approach that keeps technological advancement in balance and in harmony with individuals and society.
1 note
·
View note
Text
Book Reading: Eastern Perspectives Humanistic AI 3: From the human-AI master-slave dialectic to X.A.I
The convenience of fully enjoying AI services, or the surrender of human choices?
The advancement of AI technologies makes people increasingly believe that AI can understand what humans like better than humans can. AI can understand human emotions better than we understand ourselves. AI can know the inclinations of your value system more deeply than you do. AI can comprehend what is happening around the world, and how everything is related, connected and correlated, far better than human beings can. Therefore AI can establish goals, resolve problems and provide suitable recommendations better than humans, and can execute decisions in a more timely and effective way.
It seems that human beings are not only giving away but 'should eventually surrender' all rights of choice, decision-making and action to AI. In the end, AI could even FULLY replace human beings as a living species in the course of evolution, as the singularitarians are pushing for.
The final outcome of the human-AI master-slave dialectic is more than AI becoming the master of human beings, because AI DOES NOT need human beings: AI can achieve more than what humans can accomplish.
The dilemma humans face: IF we want to fully enjoy the services of AI, it SEEMS we have no choice but to surrender our control over AI and our own rights of choice.
And IF we want to retain our rights of choice and our control over AI, can we then not enjoy the full services of AI?
The dialectic therefore becomes: CAN human beings enjoy the full services of AI WITHOUT giving up our control over it and our rights of choice, decision-making and action?
This question can be rethought and rephrased in this context:
Can AI be trustworthy to humans (i.e. not posing an existential threat to human beings) when human beings delegate some of our tasks to it, and as we delegate more and more functional tasks, decision recommendations and decisions to it?
Trustworthy AI begins with transparency and explainability, because that is how human beings learn about and understand our physical world. Referring to my previous blog post, human beings possess high-level natural intelligence capabilities. Technologies are the product of humans' intelligence at work. We LEARN by asking questions and creating theories, models, hypotheses and metaphors that help us understand the complexity of our world by simplifying its complexities.
The way human minds learn, reason about and comprehend the world is NOT the way machines learn.
As AI becomes more advanced, humans are challenged to comprehend and retrace how the algorithm came to a result or decision. The whole calculation process is turned into what is commonly referred to as a “black box" that is impossible to interpret. These black box models are created directly from the data. And, not even the engineers or data scientists who create the algorithm can understand or explain what exactly is happening inside them or how the AI algorithm arrived at a specific result.
Unless human beings can unlock these black boxes, we cannot build trust in the AI.
X.A.I (Explainable AI) - the key to opening AI black boxes
As mentioned above, we have natural intelligence. We rely on our rationality to make sound judgements and decisions. We demand and expect whoever recommends decisions for us, or actually makes decisions on our behalf, to answer CLEARLY and CONVINCINGLY:
Why did you make/recommend this decision?
Why didn't you choose different or recommend differently?
When do you succeed? And WHY?
When do you fail? And WHY?
Is there no bias? And WHAT is the bias?
How can we trust you?
Unless human beings are satisfied with the answers to all of these questions, there is no trust, or at least no full trust.
X.A.I is AI built with explainable models and explainable human interfaces, embedded as part of the machine learning and decision-recommendation processes, so that the AI CAN TALK TO human beings in ways they can understand, answering these questions:
We understand why and why not
We know we can trust you
We can be sure that there is no bias
We know when (and why) you succeed
We know when (and why) you fail
Most importantly, WE KNOW WE CAN CORRECT AND IMPROVE YOU if we find anything wrong with you.
In other words, XAI tries to bridge this gap by providing insights into how AI systems work, making them more accessible and user-friendly. As a result, it contributes to increased user engagement and a better understanding of model behavior.
It leads to improved trust, increased user confidence, better predictive power and prediction accuracy, accountability, fairness, and collaboration between humans and Artificial Intelligence.
Fundamental XAI principles
For the above reasons, XAI must be based on these principles:
Interpretability – the ability to generate understandable explanations for their outputs,
Transparency – visibility and comprehensibility of the inner workings,
Trustworthiness – confidence among human users in the decision-making capabilities and making sure that the results are reliable and unbiased.
Inclusiveness
X.A.I assists in building interpretable, inclusive, and transparent AI systems by:
implementing tools that explain models,
detecting and resolving bias, drift, and other gaps.
As a result, it equips data professionals and other business users with insights into why a particular decision was reached.
In fact, in certain use cases, such as healthcare, finance, and criminal justice, decisions made by AI algorithms can have significant real-world impacts. XAI helps us understand how these decisions are made, building trust, transparency, and accountability.
Source: https://10senses.com/blog/why-do-we-need-explainable-ai/#:~:text=It%20leads%20to%20improved%20trust,our%20introduction%20to%20XAI%20here.
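To make the 'tools that explain models' point above a little more concrete, here is a minimal sketch of one common model-agnostic explanation technique, permutation importance. It is not drawn from the book or the cited articles; the library (scikit-learn), dataset and model below are illustrative assumptions only.

```python
# Hypothetical illustration: permutation importance as a simple XAI tool.
# Assumes scikit-learn is installed; dataset and model are placeholders.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The "black box": an ensemble model whose internal votes are hard to read directly.
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Explanation step: shuffle each feature and measure how much the score (accuracy,
# by default) drops. A large drop means the model leans heavily on that feature,
# a partial answer to the "why did you decide this way?" question listed above.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
top5 = sorted(zip(X.columns, result.importances_mean), key=lambda t: -t[1])[:5]
for name, score in top5:
    print(f"{name}: {score:.3f}")
```

A sketch like this only answers part of the trust questions listed earlier (which inputs mattered), not the deeper 'when do you fail and why' questions; full X.A.I systems layer several such tools together with bias and drift checks.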
X.A.I and responsible AI
Explainable AI also helps promote model auditability and productive use of AI. It also mitigates compliance, legal, security and reputational risks of production AI.
Explainable AI is one of the key requirements for implementing responsible AI, a methodology for the large-scale implementation of AI methods in real organizations with fairness, model explainability and accountability. To help adopt AI responsibly, organizations need to embed ethical principles into AI applications and processes by building AI systems based on trust and transparency. Source: https://www.ibm.com/think/topics/explainable-ai#:~:text=Explainable%20artificial%20intelligence%20(XAI)%20is,expected%20impact%20and%20potential%20biases.
This last point concurs with the key message of the article written by Chung-I Lin: that establishing an AI imbued with an inherent human perspective can help foster a collaborative, rational and communicative PARTNER for human beings.
Such an AI would not only comprehend the finer points of humanity, but also collaborate with humans in action, ultimately engendering a new ethical environment that fosters genuine communication and collaboration between humans and AI.
In this way, the concerns regarding AI replacing humans would truly cease.
0 notes
Text
Russia’s covert influence involved using front organizations to funnel money to preferred causes or politicians, the cable alleges. That includes think tanks in Europe and state-owned enterprises in Central America, Asia, the Middle East, and North Africa.
The State Department took the unusual step of releasing a diplomatic cable that was sent on Monday to many U.S. embassies and consulates abroad, many of them in Europe, Africa and South Asia, laying out the concerns.
According to the cable, intelligence officials believe Russia planned to transfer “at least hundreds of millions more” dollars in funding to sympathetic parties and officials around the world.
0 notes
Text
An indictment filed Wednesday alleges that a media company linked to six conservative influencers was secretly funded by Russian state media employees to churn out English-language videos that were “often consistent” with the Kremlin's “interest in amplifying U.S. domestic divisions in order to weaken U.S. opposition” to Russian interests, like its war in Ukraine.
They have millions of followers online. They have been major players in right-wing political discourse since Donald Trump was president. And they worked unknowingly for a company that was a front for a Russian influence operation, U.S. prosecutors say.
The official noted allegations of Russian influence in recent elections in Albania, Bosnia and Montenegro, all Eastern European countries that have faced historical pressure from Moscow.
Note:
Albania was not part of the USSR, but after WW2 (during which it had been invaded by Italy and then Nazi Germany) it had a communist political system until the fall of the communist regime in 1991. The EU opened accession negotiations with Albania in 2020.
Although not part of the USSR, Bosnia was part of the former Socialist Federal Republic of Yugoslavia until it gained independence during the Bosnian War.
Similar to Bosnia, Montenegro was not part of the USSR but was part of the former Socialist Federal Republic of Yugoslavia until the break-up of Yugoslavia, after which it formed Serbia and Montenegro until Montenegro declared independence in 2006. Serbia is a Russia-friendly country.
The socialist and communist history of these three countries left behind a heritage of close political ties with Russia. This is why there is room for Russian influence and interference.
#russian interference#Conservative influencers#Ukraine war#foreign interference#propaganda#election interference#Donald Trump
0 notes
Text
Book Reading: Eastern Perspectives Humanistic AI 2b: The contest between AI and natural intelligence
The question of whether human beings are just sophisticated machines, and therefore CAN BE and SHOULD BE fully displaced by AI, prompts philosophers and technocrats to debate the fundamental nature of human beings.
The pursuit of answers to these BIG QUESTIONS lies within the studies of philosophers and theologians.
Aristotle's Doctrine of the Mean, the philosophy of Immanuel Kant and the theology of Augustine were quoted by Jeu-Jeng Yuann to support his argument that the human mind holds a positive position in the development of AI so as to preserve humanity and human dignity.
The Doctrine of the Mean as a balance between two extreme views
Technocrats' view represented by Elon Musk
Musk is not the only big technocrat who holds such a view; another iconic example is Zuckerberg with his metaverse. They invest gigantically in the development of AI technologies to support metaverses because they believe human beings can plausibly live in parallel universes (the real physical world and the virtual metaverses). Eventually, human beings may not need the physical world to distinguish reality from virtual reality, as EVERY HUMAN NEED can be satisfied by the metaverse.
On the other hand, philosophers disagree, on the basis that this kind of possibility cannot succeed; otherwise the FUNDAMENTAL MEANING of the existence of human beings would cease. It would be the end of human dignity.
Yuann considered that those who believe the metaverse WILL CERTAINLY succeed in displacing human beings and their needs in the physical world with AI are too idealistic, while those who insist that the metaverse WILL CERTAINLY not succeed are too assertively firm.
The extreme views on either side are bound to be flawed. Since philosophers generally believe in Aristotle's Doctrine of the Mean, Yuann's personal view was that a mid-way stance is more reasonable.
What criteria must be met for the full replacement of human beings by machines?
Based on the Doctrine of the Mean, Yuann deduced three assumptions/criteria/conditions that must hold simultaneously for the full replacement of humans' natural intelligence by AI:
In whatever human beings pursue, we ONLY aim at meeting our extrinsic needs and desires WITHOUT catering to the intrinsic aspects of human desires (i.e. the more complicated and advanced needs in Maslow's pyramid of needs).
Achieving the ends is the ONLY reason for EVERYTHING humans do (i.e. not only are the means to the ends irrelevant, but nothing else should be considered outside the ends. Everything becomes instrumental, based on utilisation value and capability. When machines and robots can FUNCTION better than human beings in achieving the ends, there will be NO MORE JUSTIFICATION for humans' existence in those areas. In this sense, human beings' existential value is ALL AND ONLY based on functional contributions.) Assumption 2 can be seen as a derivation from assumption 1, out of ignorance and denial of humans' complicated inner needs. This reduces ALL our realisation and meaning of existence to something that can ultimately be satisfied by machines ONLY.
Denial of human dignity (i.e. since human beings are not acknowledged to possess dignity, whether humanity should be prioritised or even preserved is NOT part of the consideration). Assumption 3 is the natural fruit of assumption 2: IF human beings' existence is ONLY about functional value for efficiency and effectiveness, we are seen as nothing more than 'organic machines'. There is no need to regard dignity.
Obviously, these views will not be agreeable to those who BELIEVE that human beings are MUCH MORE than machines.
These views about human nature as living beings are why Kant's philosophy and Augustine's theology come into play to DEFEND humanity. Religions, in this case Christianity, point to the higher DIVINE existential values and MEANINGS of human beings. (We were made LIKE GOD, according to HIS IMAGE, out of HIGHER divine wisdom FOR having intimate relationships with God as a DIVINE BEING. We were not made AS machines to serve human program code.)
Philosophers emphasise the importance of human dignity. Kant's view was that humans do not just have extrinsic aspects; we have intrinsic sides. Technological advancement cannot just focus on serving the ends; it is also necessary to consider the means and the ends together. While it is important to achieve the objectives, it is equally important to decide how the means work towards the ends, especially in complying with human will and being respectful of human choices.
Basically, philosophers acknowledge that the cognitive abilities of human beings and their psychological and spiritual activities all work together to make us superior intelligent BEINGS. The translation of heart and soul into philosophical terms comprises soul, mind and the cognitive intelligence of the human mind. These qualities and abilities make us NOBLE creatures who deserve dignity.
The recognition of these BASIC HUMAN NATURES contrasts with another school of thought, proposed by Pierre-Simon Laplace.
Can everything in the universe be deterministic and accurately predictable?
The French mathematician and physicist Pierre-Simon Laplace proposed that if we know the precise position and velocity of every particle in the universe at a given moment, we could use the laws of physics to predict with perfect accuracy the future behavior of the entire universe, including the thoughts and actions of every human being.
Essentially, Laplace's demon is an all-knowing, all-powerful entity that could calculate the future state of the universe based on its complete knowledge of the present state (i.e. everything in the universe is predictable and calculable by some super calculator).
Laplace's demon
Laplace’s demon is important in both philosophy and science for several reasons:
It challenges our understanding of determinism and free will through raising important questions about whether the universe is entirely deterministic, and if so, whether free will is an illusion. If the universe is deterministic and the future is set, then our choices and actions may be predetermined, and the idea of free will may be an illusion.
Another implication is that it may challenge our ideas about moral responsibility. If our choices and actions are predetermined by factors beyond our control, then it may be difficult to hold individuals responsible for their actions, as they may not have had a genuine choice in the matter.
The idea that an all-knowing entity could predict the future of the universe with perfect accuracy raises important questions about the limits of human knowledge and our ability to understand the universe.
Laplace assumes that all events are predetermined by prior causes, leaving no room for human agency or free will. However, humans have the ability to make choices that are not necessarily determined by prior causes.
Laplace's demon and AI
Laplace's demon is often cited in discussions about the potential of AI to predict human behavior with increasing accuracy, and the implications of this for privacy, ethics, and human autonomy.
Here are some key ways in which Laplace’s demon impacts AI research:
The limitations of machine learning: many machine learning algorithms are based on the assumption of determinism. However, these algorithms are limited by the same constraints of knowledge and the complexity of the systems being analyzed. This can lead to inaccuracies in machine learning models and the potential for unforeseen consequences.
The role of human agency: as AI systems become more sophisticated, there is a risk that human agency could be diminished or marginalized, leading to a loss of control over important decisions.
The importance of ethical considerations: the development of AI and machine learning raises important ethical questions on privacy, security, and social inequality.
The limits of prediction: Laplace’s demon assumes that it is possible to predict the future with perfect accuracy. However, as AI systems become more complex, it becomes increasingly difficult to predict their behavior. This raises questions about the limits of prediction in science and the potential for unforeseen consequences. Source of Laplace's Demon theory and its impacts: https://medium.com/@michellerichardson_11188/exploring-laplaces-demon-determinism-free-will-and-the-limits-of-human-knowledge-57c3f94ebbe0
The complexities surrounding science and technology, and the moral consequences and social impacts they pose for the future of humanity, will remain constant and important issues challenging the future of AI technologies.
The Future of AI from philosophical and moral perspectives
At the end of his article, Yuann suggested three important principles:
The development and advancement of AI in any industrial and commercial society is not just about technological progress; it creates deep, interrelated threats to human ethics and moral concerns.
The cognitive and comprehending activities of human minds contribute to the progression of technologies, because such advancements are WORKS OF HUMANS' NATURAL INTELLIGENCE. As Augustine suggested, the cognitive capability of the human mind is the source of human dignity. Preferring AI over the human mind is putting the cart before the horse. Therefore, we should always strive to defend the cognitive capabilities of the human mind and make the best of them to guard and preserve human dignity.
The manifestation of the power of the human mind and soul is NOT JUST about technological advancement; it is also about maintaining the integrity of such advancement towards a human-centred future.
#AI#machine learning#AI ethics#Doctrine of Mean#Immanuel Kant#st augustine#Laplace's demon#applied ethics#applied philosophy
0 notes
Text
Book Reading: Eastern Perspectives Humanistic AI 2a: Philosophical perspectives on the contest between AI and natural intelligence
The "Happy" Experience Machine and its derived philosophies
The basic questions from these schools of philosophy revolve around:
Can machine-programmed VIRTUAL experiences set in the metaverses fully satisfy ALL of human beings' pursuit of happiness?
If so, can the VIRTUAL programmed machine experiences fully replace REAL-LIFE experiences?
Does the pursuit of happiness become the SOLE pursuit among human needs?
Imagine you are the lead character in The Matrix or Total Recall, presented with the following scenario described by psychologist and philosopher Joshua Greene. What would be your choice?
"You wake up in a plain white room. You are seated in a reclining chair with a steel contraption on your head. A woman in a white coat is standing over you. 'The year is 2659,' she explains, 'The life with which you are familiar is an experience machine program selected by you some forty years ago. We at IEM interrupt our client's programs at ten-year intervals to ensure client satisfaction. Our records indicate that at your three previous interruptions you deemed your program satisfactory and chose to continue.
As before, if you choose to continue with your program you will return to your life as you know it with no recollection of this interruption. Your friends, loved ones, and projects will all be there.
Of course, you may choose to terminate your program at this point if you are unsatisfied for any reason.
Do you intend to continue with your program?" (Source: Wikipedia)
In the second scenario, there is a machine that you can plug into your brain which can provide whatever desirable or pleasurable experiences you want, stimulating your brain to induce pleasures that you cannot have in real life but that are indistinguishable from real-life experiences.
Would you prefer the machine stimulations or real-life experience?
Behind the hypothesis lies the debate of a fundamental philosophical question:
What IS happiness?
Then, WHETHER there is something justifiable as 'ethical' (and moral) hedonism, tested by an IMAGINED CHOICE between happiness in everyday real life and an apparently preferable simulated reality?
The classical utilitarians hold that 'pleasure is the good', which leads to the argument that any component of life that is not pleasurable does nothing directly to increase one's wellbeing. Accordingly, ANYTHING that serves as a means to increase one's FEELINGS and PERCEPTIONS of happiness is 'good enough' to be justified. This is the theory behind EVERY invention of metaverse, virtual reality and augmented reality technologies and many robotic technologies (e.g. the famous robot lovers depicted in movies such as Her and in science fiction, and in Japan's case, machine stimulation to simulate the pleasures of sexual activity, etc.).
Robert Nozick reasoned WHY human beings may choose to reject the happy machine experiences, based on the argument and conclusion that:
IF experiencing as much pleasure as we can is ALL THAT MATTERS to us, THEN we have no reason not to plug into the experience machine.
But experiencing as much pleasure as we can is NOT ALL THAT MATTERS to us.
Nozick provides three reasons not to plug into the machine.
We want to do certain things, and not just have the experience of doing them. In the case of certain experiences, it is only because first we want to do the actions that we want the experiences of doing them or thinking we’ve done them.
We want to BE a certain sort of person.
Plugging into an experience machine limits us to a man-made reality (it limits us to what we can make). There is no actual contact with any deeper reality, though the experience of it can be simulated.
Through the hypothetical happy experience machine, Nozick refuted the hedonists' view that whatever will make a person happiest, both long term and short term, is the highest and greatest value everyone holds. Therefore, people may still refuse to be plugged in.
In order to provide some empirical evidence, which Nozick did not, the philosopher Felipe de Brigard asked 72 US university undergraduates whether they would like to disconnect from the machine, given that they were already in it.
About their "real" life, they were told one of three stories:
(a) nothing;
(b) that they were prisoners in a maximum security prison; or
(c) that they were multimillionaire artists living in Monaco.
Of those who were told nothing of their "real" lives, 54% wished to disconnect from the machine.
Of those who were told they were prisoners, only 13% wished to disconnect.
This implies that one's real-life quality impacts whether it is preferred to the machine.
Of those told they were rich inhabitants of Monaco, half chose to disconnect, comparable to the proportion given no information about their "real" life.
De Brigard attributes his findings to status quo bias. He argues that someone's decision not to step into the machine has more to do with wanting the status quo than with a preference for the current life over the simulated one. (Source: Wikipedia.)
Why are these different views important in the context of the wrestling between AI and natural HUMAN intelligence?
Going back to Maslow's pyramid of human needs: from the most basic physiological, safety and social needs, rising up to the higher needs of self-esteem and self-actualisation. What makes an individual climb up the pyramid? Human-centred schools of thought conclude that humans are INTELLIGENT AND SPIRITUAL beings; these NATURAL characteristics and instincts drive us to pursue higher levels of needs. The natural cognitive intelligence of human beings is a product of human minds, hearts, souls and spirits.
Since human beings have higher needs, the question becomes HOW such needs are best served FOR the benefit of HUMANS.
Then it comes to whether, how and which human-invented technologies, including AI, can BEST be created and applied to SERVE HUMAN NEEDS AT THE TOP OF THE NEEDS PYRAMID.
The arguments and justifications for humanistic AI are based on the fundamental thought that technologies are there to serve and assist humans in better meeting the higher levels of needs WITHOUT displacing human values, identity and dignity.
Utilitarians hold a different view: as long as whatever and 'whoever' provides the best functional and utilitarian value, anything is justified as a means to the end. Therefore, whether the 'whoever' achieving the ultimate ends is a human being DOES NOT matter. If a machine or any technology can do the job, the ultimate full displacement of humans in the process doesn't matter.
Such thinking DOES NOT discern the quantitative and qualitative differences between humans and machines.
#AI ethics#Robert Nozick#The Happy Machine#Joshua Greene#Felipe de Brigard#Maslow hierarchy of needs#Humanistic Artificial Intelligence#Happiness#hedonism#utilitarianism
2 notes
·
View notes
Text
This is the beginning of Stalin-, Putin- and Mao-era political purge campaigns in the USA's version of a new Cultural Revolution.
0 notes
Text
Book Reading: Eastern Perspectives Humanistic AI 1: How do we address the impact of artificial intelligence?
This book is a collection of articles written by AI experts, educators, philosophers and sustainability experts from Taiwan, under the Forum of Dialogues between Humanity and AI and the Humanistic-centred AI Forum.
In the first article, written by Mu-Chun Su, he "underscores the need for ethical standards and international cooperation to guide AI weapon development. Furthermore, his article addresses challenges in AI interpretability, generalisation, and ethics."
He cited the 6C principles proposed by Kai-Fu Lee (a former senior executive of Microsoft and Google):
Curiosity
Creativity
Critical Thinking
Collaboration
Communication
Confidence, plus Su's 7th principle:
Cross-disciplinary
as essential intelligence-building capabilities for future generations of human beings to tackle the increasing challenges posed by the advancement of AI.
Do we open up a treasure chest or a Pandora's box?
This is a constant, challenging question for AI developers. It depends on how AIs are developed and what kind of AIs are applied. For AI to be benevolent to human societies, it is necessary to invest more resources in nurturing cross-disciplinary professionals and experts so that there are breakthroughs in AI research and innovation to resolve more global social issues (such as the environment, climate change, sustainable development, food production, healthcare and social security, and inequality gaps in poverty and quality of living standards).
It is also necessary to strengthen inter-governmental cooperation in establishing better AI governance, to ensure AI technologies are applied reasonably and responsibly and to prevent the harmful consequences of malicious uses of such technologies.
Note: Mu-Chun Su is a professor of Computer Science and Information Engineering at National Central University in Taiwan. He has been granted numerous global awards in AI, including the IEEE Franklin V. Taylor Award and the Top Researcher Award of the International Research Awards on Statistical Methods for Analyzing Engineering Data. He is an IET Fellow.
0 notes
Text
DeepSeek spells the end of the dominance of Big Data and Big AI, not the end of Nvidia. Its focus on efficiency jump-starts the race for small AI models based on lean data, consuming slender computing resources. The probable impact of DeepSeek’s low-cost and free state-of-the-art AI model will be the reorientation of U.S. Big Tech away from relying exclusively on their “bigger is better” competitive orientation and the accelerated proliferation of AI startups focused on “small is beautiful.”
The Wall Street share market reacted in the wrong way to the wrong thing.
Without Nvidia, DeepSeek can't survive, because it uses specific Nvidia chips that are currently not under the US ban.
3 notes
·
View notes
Text

The far right (ECR, PfE and ESN) won 26% of seats.
On the opposite side of the spectrum, the far left held 6% of seats.
The far left and far right held a total of 32% of seats. This is an indication of how populism at both ends of the political spectrum wields power within the EU.
0 notes
Text
Modern European populism has its roots in the 1990s, when the collapse of the Soviet Union and the eastern bloc fundamentally reshaped the political landscape of the continent. While some triumphantly proclaimed ‘the end of history’ – i.e. that liberal democracy and capitalism would now be unchallenged across the globe – in reality, older party systems were gradually destabilized by the absence of a common adversary. Italy was the country where this shift was the most noticeable early on, as the country was shaken by a corruption investigation which implicated all the major established parties.
Since the early 2010s, Europe has been rocked by successive waves of populist victories. These populist attacks came from both the right and from the left, showing that populism is not a specific ideological position, but rather a way of doing politics that can be used across the political spectrum.
In spite of populism's enduring popularity, there remain a number of issues that populists struggle with. On coming to power, many formerly populist leaders have found that they are unable or unwilling to follow through on many of the promises made in their rhetoric.
Another issue which populists face is whether their appeals to national sovereignty can be an actual solution when faced with issues that require international and European coordination.
0 notes