thxnews · 1 year
Government Announces £1B Boost for UK's Semiconductor Sector
- National Semiconductor Strategy reveals plan to double down on design, research and advanced chip leadership – securing the UK’s position as a global science and technology superpower
- chip plan sets out how UK will build on industry strengths, safeguard supply chains from disruption and protect tech against national security risks – supporting the delivery of the government’s priority to grow our economy
- government will invest up to £1 billion in the next decade to improve access to infrastructure, power more research and development and facilitate greater international cooperation, with up to £200 million over the years 2023-25
- follows UK and Japan commitment to establish ambitious collaboration in the semiconductor sector, covering R&D cooperation, skills exchanges, and improving the resilience of the semiconductor supply chain
National Semiconductor Strategy. Photo by GOV.UK.

The National Semiconductor Strategy sets out how up to £1 billion of government investment will boost the UK’s strengths and skills in design, R&D and compound semiconductors, while helping to grow domestic chip firms across the UK. Working in tandem with industry, investment made by the government will drive research, innovation and commercialisation through the sector – helping to deliver products from lab to market. It comes as the Prime Minister is at the G7 Leaders’ Summit in Japan for discussions on strengthening our tech collaboration with like-minded economies and strengthening supply chains for critical technology like semiconductors.

Semiconductors are vitally important to the modern world, being an essential component of almost every electronic device we use. From phones and computers to ventilators and power stations, nearly every piece of technology in the world depends on them. Over a trillion semiconductors are manufactured each year, with the global semiconductor market forecast to reach a total market size of $1 trillion by 2030. Semiconductors also underpin future technologies, such as artificial intelligence, quantum and 6G.

The strategy focuses on the UK’s particular areas of strategic advantage in the semiconductor sector – semiconductor design, cutting-edge compound semiconductors, and our world-leading R&D ecosystem – supported by UK universities from Cambridge to Cardiff and Manchester to Edinburgh demonstrating global leadership in this space. Compound semiconductors do things silicon chips can’t, with use cases in evolving technologies such as autonomous driving and future telecoms. Their creation requires expertise in advanced materials, an area of UK science leadership.
To support the growth of the sector in the UK, the government will invest up to £200 million over the years 2023-25 to improve industry access to infrastructure, power more research and development and facilitate greater international cooperation. Taking a strategic approach to investment over the next decade, the government will invest up to £1 billion in a range of measures to secure the UK’s advantage in this globally important sector and meet three key objectives:

- growing the domestic sector
- mitigating the risk of supply chain disruptions
- protecting our national security

The strategy builds on the consistent support the government has provided for the semiconductor industry, having provided £539 million in grants for research and £214 million directly to SMEs in the sector across the last 10 years, as well as funding 450 PhD students since 2017.
UK Prime Minister working at his desk in 10 Downing Street. Photo by GOV.UK.

Prime Minister Rishi Sunak said:

“Semiconductors underpin the devices we use every day and will be crucial to advancing the technologies of tomorrow. Our new strategy focuses our efforts on where our strengths lie, in areas like research and design, so we can build our competitive edge on the global stage. By increasing the capabilities and resilience of our world-leading semiconductor industry, we will grow our economy, create new jobs and stay at the forefront of new technological breakthroughs.”
Official portrait of Chloe Smith MP. Photo by Richard Townshend. Flickr.

Science, Innovation and Technology Secretary Chloe Smith said:

“Semiconductors are the beating heart of all electronic devices, from powering our phones and cars to medical equipment and innovative new technologies like quantum and AI which will make a real difference to all of our lives. Britain is already a world leader when it comes to researching and designing semiconductor technology – our new strategy will double down on these core strengths to create more skilled jobs, grow our economy, boost our national security and cement the UK’s status as a global science and technology superpower.”

The UK’s Integrated Review placed securing strategic advantage in science and technology at the heart of the UK’s national security and foreign policy. In recognition of the fundamental importance of semiconductor technologies in these areas, the National Semiconductor Strategy demonstrates a clear vision for our position in the sector. As part of the strategy, the UK will increase its cooperation with close partners, working together to manage national security threats and drive growth in the sector, while championing international cooperation to help develop a coordinated approach to supply chain resilience.

In Hiroshima this week, the UK and Japan committed to establishing an ambitious semiconductor partnership, led by the UK’s Department for Science, Innovation and Technology (DSIT) and Japan’s Ministry of Economy, Trade and Industry (METI). It seeks to deliver new R&D cooperation and skills exchanges, and to improve the resilience of the semiconductor supply chain for both countries. UK Research and Innovation will work with the Japan Science and Technology Agency on a joint investment of up to £2 million in early-stage semiconductor research next year. This will support UK and Japanese researchers to work together on fundamental semiconductor technologies.
The strategy has been developed in close consultation with the semiconductor industry and academia, and the government will build on this partnership by creating a new UK Semiconductor Advisory Panel. The Panel will bring together key figures from industry, government, and academia to work closely on shared solutions and implementation.  
Srabanti Chowdhury. Photo by UC Davis College of Engineering. Flickr.  
Growing the UK industry
The government will focus on growing the UK’s unique and already world-leading strengths in compound semiconductors, research and development, intellectual property and design by investing up to £200 million over the years 2023-25, and up to £1 billion in the next decade. This funding will be used to improve the talent pipeline and will make it easier for British firms to access things like prototyping, tools and business support.

These efforts will include investment in a new National Semiconductor Infrastructure Initiative to unlock the potential of British chip firms in these key areas. It will look at whether better access to prototyping facilities for chip firms is needed to tackle barriers to innovation and grow the industry. It will also explore opportunities to make specialist software tools more available for start-ups. The Department for Science, Innovation and Technology has commissioned research into the best way to establish the Infrastructure Initiative, which will report its findings in the autumn. Furthermore, the government will announce plans by the autumn on support for investment in the semiconductor manufacturing sector, particularly where facilities are critical to the UK tech ecosystem or the UK’s national security.

Further announcements include:

- a new UK Semiconductor Advisory Panel that brings together key figures from industry, government, and academia to work together to deliver the strategy. The Advisory Panel will speak on behalf of the sector and provide advice and feedback
- a specialist incubator pilot that will focus on removing obstacles which hold semiconductor startups back from growth. The scheme, launching today, will provide industry with better access to technical resources as well as coaching and networking
- support for industry-led learning to ensure people can gain the skills the semiconductor industry needs. Programmes will provide opportunities for learning focused on the advanced skills needed for the sector, such as electrical and electronic engineering and computer science
Safeguarding supply chains
The journey of a semiconductor chip from lab to market can involve thousands of production stages taking place across the world, with various locations that have particularly concentrated production capabilities. The surge in demand for consumer electronics during the pandemic demonstrated how global industries can be impacted by semiconductor supply issues. This strategy highlights the importance of collaboration with international allies to develop secure supply chain resilience. The government will take steps to help sectors mitigate the impact of supply shortages in the future. The UK government also wants to protect critical sectors (essential services, healthcare, critical national infrastructure and defence) from disruptions that could cause risks to life or national security.

To help ensure the UK is better protected against future disruption, the government commits to:

- new guidance to be published to help businesses better understand risks and the steps they can take to be more resilient against supply chain shocks
- continued collaboration through international initiatives – like the UK’s technology partnerships with the US, Japan, and the Republic of Korea – to explore shared approaches and solutions to improve global supply chain resilience
GCHQ Building at Cheltenham, Gloucestershire. Photo by Defence Images. Flickr.  
Protecting the UK against security risks
Semiconductors can create vulnerabilities in the electronic devices they are used in, and these risks are becoming more significant as the use of internet-connected devices increases. The government is clear that a compromise to the cyber security of the hardware behind every device powering modern life is not acceptable. The acquisition of chip firms can also present national security issues.

The strategy announces actions to protect the UK against these security risks, including:

- additional information on the government’s approach to using the National Security and Investment Act, indicating the areas of the sector where the government has seen particular national security concerns arise, to ensure technology remains securely protected
- continued support for world-leading programmes like Digital Security by Design, which aims to ensure semiconductors can be more resilient and secure in the face of growing cyber threats

Sources: THX News, Department for Science, Innovation and Technology & The Rt Hon Chloe Smith MP.
lumanlife-blog · 7 years
Legislation and Ethical Guidelines for Intelligence Technologies (LEGIT): principles for a more enlightened and civilized society
Abstract

Recent technological advances to augment human intelligence (aka Intelligence Amplification or IA) can potentially allow us to make our cities and citizenry smarter than ever. However, their corruptive and disruptive impact on health suggests the information technology (IT) industry must establish an ethical framework to ensure our future generations get the most from life. To mitigate risks, a number of organizations have introduced various codes of ethics. Despite this positive move, most codes focus on enabling public access to data and professional integrity to the exclusion of all else. While both domains are important, we argue that they do not nurture the kind of intelligences humanity needs to thrive and prosper. To address these blind spots, this paper draws on recent evidence that three human factors (chronobiology, collaboration, creativity) are vital to humanity's future, and that harnessing them will ensure our IT professionals design more life-supporting systems. The 3 "Laws" presented as Legislation and Ethical Guidelines for Intelligence Technologies (LEGIT) aim to stimulate critical debate on the subject and nudge the sector to take practical and meaningful action.
The future of AI
The idea of artificial intelligence (AI) has been around since the 1956 summer workshop at Dartmouth College. The workshop was convened by John McCarthy, who coined the term “artificial intelligence,” and attended by a raft of AI pioneers including Claude Shannon, Herbert Simon, and Marvin Minsky. This seminal event defined AI as a set of methods that could provide machines with the ability to achieve goals in the world. Attendees of the workshop believed that, by 2001, computers would implement an artificial form of human intelligence (Solomonoff, 1985). Recent advances in neural networks modelled on the human brain have resulted in striking breakthroughs, most of which involve a machine learning technique known as deep learning. Deep learning uses a photograph’s pixels as input variables to predict output variables without needing to understand underlying concepts, just as a standard regression model predicts a person’s income from educational, employment, and psychological statistics. Such algorithms now beat humans at games of skill, master video games with no prior instruction, 3D-print original paintings in the style of Rembrandt, grade student papers, cook meals, vacuum floors, and drive cars (Guszcza et al., 2017).
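To make the regression analogy above concrete, here is a minimal sketch of the kind of "standard regression model" the text describes: it fits a straight line from one observed variable (years of education) to another (income) purely from the numbers, with no understanding of what either variable means. The data points are invented for illustration only.

```python
# Ordinary least squares with one predictor, written out by hand to show
# that the model is pure arithmetic over the data -- no concepts involved.

def fit_line(xs, ys):
    """Return (slope, intercept) minimising the sum of squared errors."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
            sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Hypothetical training data: years of education vs income in £1,000s.
education_years = [10, 12, 14, 16, 18]
income_k = [22, 28, 35, 41, 48]

slope, intercept = fit_line(education_years, income_k)
predicted = slope * 15 + intercept  # predicted income at 15 years of education
print(f"slope={slope:.2f}, predicted income at 15 years: about £{predicted:.0f}k")
```

A deep network does the same job at vastly larger scale (pixels in, labels out), but the underlying principle is the one shown here: mapping inputs to outputs by fitting parameters to data.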
Due to more effective algorithms, computing power, and data capture and storage, real-world AI applications have exploded in the last decade. AI systems are already built into everyday technologies like our mobile devices and voice-activated personal assistants to help us manage various aspects of our lives. AI is also being used within the legal, financial, and workplace sectors to predict behaviours and map leisure preferences (Campolo et al., 2017). In addition, thousands of digital health apps are being developed to help track our daily activities and prompt us to make healthier lifestyle choices (Topol, 2015).

The problem is that AI algorithms only work when the data used to train them are sufficiently reflective of the environment in which they are used. In other words, when routine tasks can be encoded in big data sets, algorithms have become brilliantly adept at outperforming humans. Yet when given a more novel task that requires conceptual reasoning, even the most powerful AI still cannot learn as well as a five-year-old does (Gopnik, 2017). This is because AI is founded on computer-age statistical inference – not on an approximation or simulation of what we believe human intelligence to be (Efron and Hastie, 2016). This narrow type of machine learning is far from the vision outlined at the Dartmouth workshop in 1956, or indeed expressed in fictional AI characters such as HAL 9000 in Kubrick’s 2001: A Space Odyssey.
AI’s narrow machine learning is also doing very little to augment our own cognitive capacities. Recent government drives towards automation have meant people increasingly work and live in a 24-hour society. These 24-hour lifestyle changes have placed huge demands for flexibility on the human body (Kreitzman and Foster, 2011). Instead of living diurnally (active in the day, resting at night), people are living in an always-on “now,” where the priorities of the present dominate. Living in this state of what Douglas Rushkoff calls “present shock” means people have developed a distorted relationship to time. Financial traders no longer invest in futures but instead expect profits from computer algorithms. Citizens have no historical sense of how their governments function and demand immediate results from representatives. Children text during an event to find out if there’s something better somewhere else (Rushkoff, 2013).
This is not to say that the idea of AI does not have great potential. Automating mechanical tasks has transformed society for millennia and is likely to continue to do so into the future (Innis, 2004). What needs to be carefully considered are the practical ramifications of 24/7 AI systems on people and society. Mobile phones are already negatively impacting the mental health and wellbeing of children (Carr, 2011). Meals consumed at night increase our risk of heart disease. Long-term shift work is linked to a raft of reproductive problems, such as increased risks of miscarriage, retarded foetal development, and spontaneous abortion. Sleep loss is also triggering an epidemic of obesity, gut disorders, and drug-addiction cycles as people try to maintain regular function (Kreitzman and Foster, 2011). Increased work-related accidents, the number of sick days taken, and family and marital stress are just some of the factors that will negatively impact our ability to succeed in the coming decades.
The reason AI systems are so damaging is simple. Unlike the algorithmic systems we create to optimise work functions, humans are not computers that run software programs 24/7. We need vital environmental cues to synchronize our body’s biological rhythms to the Earth’s daily and annual cycles. When those cues are disrupted by erratic behaviours (disrupted eating and sleeping), we get ill. As neuroscientist Russell Foster explains:
“All of us in the developed world now live in a ‘24/7’ society. This imposed structure is in conflict with our basic biology. The impact can be seen in our struggle to balance our daily lives with the stresses this places on our physical health and mental well-being. We are now aware of this fundamental tension between the way we want to live and the way we are built to live”.
Figure 1: Intelligence Amplification (IA)
It’s becoming increasingly clear that the most promising AI applications lie not in algorithmic machines that authentically think like humans, but in harnessing technologies that enable humans and computers to think better together, a field called Intelligence Amplification (IA) (Figure 1). IA has huge potential to allow us to make our cities and citizenry smarter than ever. However, recent developments are sophisticated enough to pose great risks if placed in the wrong hands, whether they be corrupt governments, corporations, or both, as is the case in 21st-century politics (Müller and Bostrom, 2016). To mitigate these risks, we must establish clear legislation and ethical guidelines for the information technology (IT) professions, so that future intelligence systems enable a more enlightened and civilized society (Berman and Cerf, 2017).
Indeed, ethical guidelines for the IT professions have already been established in some, but not all, countries. Dr. Eike-Henner Kluge authored 11 principles for the American Health Information Management Association (AHIMA), which have been adapted by the British Computer Society (BCS) and the UK Council for Health Informatics Professions (UKCHIP). The European Federation for Medical Informatics (EFMI) does not explicitly state any code, but is a member of the International Medical Informatics Association (IMIA) (Samuel and Zaiane, 2014).
Despite various adaptations, all codes converge around four key principles:
Public Interest (i.e., the need to maintain regard for public health, privacy, security and wellbeing of others and the environment; and to promote inclusion and equal access to IT)
Professional Integrity (i.e., the need to undertake work that reflects professional competence; continue to respect, develop, and share knowledge; and to comply with legislation)
Duty to Relevant Authority (i.e., the need to carry out professional responsibilities with care and diligence, in accordance with the Relevant Authority's requirements)
Duty to the Profession (i.e., the need to accept personal duty to uphold the reputation of the profession and not take any action which could bring the profession into disrepute)
Wearable computing pioneer Steve Mann has also spent many years developing a code of ethics on human augmentation, which has resulted in three fundamental "Laws". These include: (i) the right to know when and how you are being monitored in the real and virtual world; (ii) the right to monitor the systems or people monitoring you and use that information in crafting your own digital identity; and (iii) the right to immediately understand the world you are in (Mann et al., 2016).
While the above codes are an important first step toward mitigating risks of human enhancement and AI, the challenge is they focus on enabling public access to data and professional integrity to the exclusion of all else. While both factors are necessary, they do not nurture the kind of intelligences humanity needs to thrive and prosper. To address these blind spots, this paper draws on recent evidence that three human factors (chronobiology, collaboration, creativity) are vital to humanity's future, and that harnessing them will ensure our IT professionals design more life-supporting systems. The 3 "Laws" presented as Legislation and Ethical Guidelines for Intelligence Technologies (LEGIT) aim to stimulate critical debate on the subject and nudge the sector to take practical and meaningful action.
Law I: Protect chronobiology
All technologies must provide humans with 24-hour temporal reference points to help them measure their progress, ambitions, and actions (Figure 2). Integration of temporal factors in technologies will remind humans they exist in a physical body, and that circadian clocks, which display 24-hour periodicity, control nearly all biological patterns, including brain-wave activity, sleep-wake cycles, body temperature, hormone secretion, blood pressure, cell regeneration, metabolism and behaviour (Kreitzman and Foster, 2011).
During working hours, humans have a basic right to know when and how organizations are tracking their chronobiology, and reciprocally monitor the chronobiology of organizations. During evenings, weekends, and holidays, humans have the right to disconnect from being monitored, and reconnect with people and groups that matter to them, such as family and friends. All human monitoring and communication must be limited to working hours to support optimal sleep/wake cycles and longevity (Kreitzman and Foster, 2011).
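As an illustration of how Law I's working-hours limit might be encoded in a monitoring system, here is a minimal, hypothetical sketch. The 09:00–17:00 weekday window, the function name, and the policy constants are illustrative assumptions, not part of the LEGIT text.

```python
# Hypothetical enforcement of Law I: a monitoring event is permitted only
# on weekdays within a configured working-hours window; evenings, weekends,
# and holidays fall outside it, honouring the right to disconnect.
from datetime import datetime, time

WORK_START = time(9, 0)   # assumed start of working hours
WORK_END = time(17, 0)    # assumed end of working hours

def monitoring_permitted(ts: datetime) -> bool:
    """Return True only if ts falls on a weekday within working hours."""
    is_weekday = ts.weekday() < 5                  # Monday=0 .. Friday=4
    in_window = WORK_START <= ts.time() < WORK_END
    return is_weekday and in_window

print(monitoring_permitted(datetime(2023, 5, 19, 10, 30)))  # a Friday morning
print(monitoring_permitted(datetime(2023, 5, 20, 10, 30)))  # a Saturday
```

A fuller implementation would also consult a holiday calendar and the worker's own declared schedule; the point of the sketch is only that the rule is trivially machine-checkable once a system chooses to respect it.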
Figure 2: Protect 24-hour human chronobiology
Law II: Integrate collaboration
Smart cyber-physical systems offer humans the ability to create and share goods at near-zero marginal cost (Rifkin, 2014). This post-capital shift to what some call the “the sharing economy” or “zero marginal cost society” is estimated to be worth $4.5 trillion by 2030 (Lacy and Rutqvist, 2016). To maximise the potential of this shift and overcome current challenges, organizations will need to reward creative collaboration between citizens and incentivize sustainability (Rifkin, 2014, Lacy and Rutqvist, 2016).
To achieve this, future technologies must integrate radical human collaboration into every stage of the development cycle (Figure 3). Prioritizing creative diversity will ensure technologies are less contaminated by cognitive bias, which will boost human skills and knowledge to result in breakthrough innovations (Page, 2008). Diverse collaboration will also ensure systems are systemic in nature, addressing root causality of problems rather than changing parts of the whole (Snowden and Kurtz, 2003).
Figure 3: Integrate human collaboration at every stage of development
Law III: Nurture creativity
The highly desirable metatrait of creativity (aka social effectiveness) is central to determining human physiological, reproductive, and socioeconomic success (Rushton and Irwing, 2011; Cloninger, 2013; Musek, 2007). The three underlying traits that give rise to creativity have various labels; however, they tend to reflect common characteristics related to Dynamism (self-expression, openness), Emotionality (self-awareness, self-transcendence), and Stability (self-efficacy, self-regulation).
For humans to thrive and prosper, technologies must nurture creative adaptiveness (Figure 4) to ensure everyone can reap its physiological, reproductive, and socioeconomic benefits (Rushton and Irwing, 2011; Cloninger, 2013; Musek, 2007). Nurturing creative adaptiveness across all levels of society also has the potential to solve many of the 21st century’s most complex problems (De Beule and Nauwelaerts, 2013), and thus mitigate some of the challenges posed by AI (Brundage, 2015).
Figure 4: Nurture human creative adaptiveness traits
Technologist Pledge
As a technologist and member of the technology profession:
I WILL RESPECT & MAINTAIN the health, autonomy, and dignity of people and communities;
I WILL PRACTICE in accordance with the 3 Laws outlined in the LEGIT to maximise outcomes in human chronobiology, human collaboration, and human creativity;
I WILL NOT PERMIT considerations of age, ethnicity, gender, nationality, sexual orientation, or any other factor to intervene between my collaborative work with people;
I WILL ATTEND TO my own health and abilities to ensure my work is of the highest standard;
I WILL NOT USE my technological knowledge to violate human rights, even under threat; and
I WILL RESPECT & SHARE knowledge for the betterment of people and technology.
References
BERMAN, F. & CERF, V. G. 2017. Social and ethical behavior in the internet of things. Communications of the ACM, 60, 6-7.
BRUNDAGE, M. 2015. Taking superintelligence seriously: Superintelligence: Paths, dangers, strategies by Nick Bostrom (Oxford University Press, 2014). Futures, 72, 32-35.
CAMPOLO, A., SANFILIPPO, M., WHITTAKER, M. & CRAWFORD, K. 2017. AI Now 2017 Report. AI Now Institute at New York University.
CARR, N. 2011. The shallows: what the Internet is doing to our brains, WW Norton.
CLONINGER, C. R. 2013. What makes people healthy, happy, and fulfilled in the face of current world challenges? Mens Sana Monographs, 11, 16.
DE BEULE, F. & NAUWELAERTS, Y. 2013. Innovation and creativity: pillars of the future global economy, Edward Elgar Publishing.
EFRON, B. & HASTIE, T. 2016. Computer age statistical inference, Cambridge University Press.
GOPNIK, A. 2017. Making AI more human. Scientific American, 316, 60-65.
GUSZCZA, J., LEWIS, H. & EVANS-GREENWOOD, P. 2017. Cognitive collaboration why humans and computers think better together. Deloitte Review.
INNIS, H. A. 2004. Changing concepts of time, Rowman & Littlefield.
KREITZMAN, L. & FOSTER, R. 2011. The rhythms of life: the biological clocks that control the daily lives of every living thing, Profile Books.
LACY, P. & RUTQVIST, J. 2016. Waste to wealth: the circular economy advantage, Springer.
MANN, S., LEONARD, B., BRIN, D., SERRANO, A., INGLE, R., NICKERSON, K., FISHER, C., MATHEWS, S. & JANZEN, R. 2016. Code of Ethics on Human Augmentation. VRTO Virtual & Augmented Reality World Conference + Expo.
MÜLLER, V. C. & BOSTROM, N. 2016. Future progress in artificial intelligence: a survey of expert opinion. In: MÜLLER, V. C. (ed.) Fundamental Issues of Artificial Intelligence. Cham: Springer International Publishing.
MUSEK, J. 2007. A general factor of personality: evidence for the Big One in the five-factor model. Journal of Research in Personality, 41, 1213-1233.
PAGE, S. E. 2008. The Difference: how the power of diversity creates better groups, firms, schools, and societies, Princeton University Press.
RIFKIN, J. 2014. The zero marginal cost society: the internet of things, the collaborative commons, and the eclipse of capitalism, St. Martin's Press.
RUSHKOFF, D. 2013. Present shock: when everything happens now, Penguin.
RUSHTON, P. & IRWING, P. 2011. The general factor of personality: normal and abnormal. In: CHAMORRO-PREMUZIC, T., VON STUMM, S. & FURNHAM, A. (eds.) Wiley-Blackwell handbook of individual differences. Wiley-Blackwell.
SAMUEL, H. W. & ZAIANE, O. R. 2014. A repository of codes of ethics and technical standards in health informatics. Online J Public Health Inform, 6, e189.
SNOWDEN, D. & KURTZ, C. F. 2003. The new dynamics of strategy: sense-making in a complex and complicated world. IBM Systems Journal, 42, 35-45.
SOLOMONOFF, R. J. 1985. The time scale of artificial intelligence: reflections on social effects. Human Systems Management, 5, 149-153.
TOPOL, E. J. 2015. The patient will see you now: the future of medicine is in your hands, Tantor Media.
Grant Munro is director of London’s Digital Health Advisory Board and honorary academic at the National Institute of Health Innovation, University of Auckland, New Zealand. He is cofounder of the Innovation Party, Britain’s first political movement dedicated to fostering agile governance through peer-to-peer networks. His health blogs Shockism.com and Luman.life focus on charting frontier health technologies to help people get the most from life. He can be reached at Medium, Twitter, Facebook, GrantMunro.com or via email at [email protected].