# Datafication in Health
The Transformative Power of Datafication in Healthcare

In recent years, the healthcare sector has undergone a revolutionary transformation through datafication — the conversion of diverse healthcare elements into digital data. This shift to a data-driven healthcare ecosystem is reshaping the landscape, enhancing decision-making, personalizing treatments, and improving patient outcomes. In this article, we delve into the significance of datafication in health, its transformative effects, and the benefits it brings to both patients and the industry.
The Rise of Datafication in Health
Healthcare, inherently data-rich, historically grappled with analog and paper-based formats, impeding effective analysis. The digital revolution introduced electronic health records (EHRs) and digital systems, enabling the structured collection, storage, and analysis of health data.
Transforming Health Data into Actionable Insights
Datafication empowers healthcare providers to convert raw health data into actionable insights using advanced analytics and machine learning algorithms. This enables evidence-based decision-making, influencing treatment plans, operational efficiencies, resource allocation, and public health strategies.
Personalized Medicine and Treatment
Datafication facilitates personalized medicine by analyzing individual patient data, tailoring treatment plans based on genetic makeup, lifestyle, and medical history. This approach enhances treatment effectiveness while minimizing side effects.
Predictive Analytics for Disease Prevention
Datafication, through predictive analytics, identifies potential health risks and diseases early by analyzing historical health data. This proactive intervention improves outcomes and reduces healthcare costs.
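As a toy illustration of the idea (not something from the article), a predictive risk model can be as simple as a logistic function over a few patient features; the features, weights, and threshold below are entirely invented for demonstration:

```python
import math

# Hypothetical model weights -- illustrative only, not clinically derived
WEIGHTS = {"age": 0.04, "bmi": 0.08, "systolic_bp": 0.02}
BIAS = -7.0

def risk_score(patient):
    """Return a 0-1 risk estimate via a logistic function over patient features."""
    z = BIAS + sum(WEIGHTS[k] * patient[k] for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

def flag_for_followup(patient, threshold=0.5):
    """Proactive intervention: flag patients whose estimated risk exceeds the threshold."""
    return risk_score(patient) >= threshold

patient = {"age": 62, "bmi": 31.0, "systolic_bp": 150}
flagged = flag_for_followup(patient)
```

In practice such weights would be learned from historical records and validated clinically; the sketch only shows the shape of the pipeline: features in, risk score out, flag for early intervention if the score crosses a threshold.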
Benefits of Datafication in Health
The integration of datafication in health yields benefits across patient care, research, innovation, and resource allocation:
Enhanced Patient Care and Outcomes: Real-time monitoring through datafication enables timely interventions, resulting in better medical treatment and improved health outcomes.
Research and Innovation: The vast pool of health data supports research-driven innovations and advancements in healthcare.
Efficient Resource Allocation: Datafication aids in optimizing resource allocation, reducing costs, and increasing operational effectiveness.
The Role of Data in Healthcare
The pivotal role of data in healthcare includes informed decision-making, personalized medicine, research and innovation, healthcare operations and efficiency, healthcare policy and planning, telemedicine and remote monitoring, early disease detection and prevention, quality improvement and outcome monitoring, patient engagement and empowerment, and population health management.
Datafication is reshaping the future of healthcare by harnessing data’s power to drive informed decision-making, improve patient outcomes, and enhance operational efficiencies. As technology advances, embracing datafication becomes crucial in realizing a personalized, efficient healthcare ecosystem focused on delivering the best care possible.
---
You're not being paranoid. If you always feel like somebody's watching you, as the song goes, you're probably right. Especially if you're at work.
Over the course of the Covid-19 pandemic, as labor shifted to work-from-home, a huge number of US employers ramped up the use of surveillance software to track employees. The research firm Gartner says 60 percent of large employers have deployed such monitoring software—it doubled during the pandemic—and will likely hit 70 percent in the next few years.
That's right—even as we've shifted toward a hybrid model with many workers returning to offices, employee surveillance (dubbed "bossware" by some) isn't going away; it's here to stay and could get much more invasive.
In the book Your Boss Is an Algorithm, authors Antonio Aloisi and Valerio de Stefano describe the "expanded managerial powers" that companies have put into place over the pandemic. These include the adoption of more tools, including software and hardware, to track workers' productivity, their day-to-day activities and movements, computer and mobile phone keystrokes, and even their health statuses.
This can be called "datafication" or "informatisation," according to the book, or "the practice by which every movement, either offline or online, is traced, revised and stored as necessary, for statistical, financial, commercial and electoral purposes."
Ironically, experts point out that there's not sufficient data to support the idea that all this data collection and employee monitoring actually increases productivity. But as the use of surveillance tech continues, workers should understand how they might be surveilled and what, if anything, they can do about it.
What Kind of Monitoring Is Happening?
Using surveillance tools to monitor employees is not new. Many workplaces continue to deploy low-tech tools like security cameras, as well as more intrusive ones, like content filters that flag content in emails and voicemails or unusual activity on work computers and devices. The workplace maxim has long been that if you're in the office and/or using office phones or laptops, then you should never assume any activity or conversation you have is private.
But the newer generation of tools goes beyond that kind of surveillance to include monitoring through wearables, office furniture, cameras that track body and eye movement, AI-driven software that can hire as well as issue work assignments and reprimands automatically, and even biometric data collection through health apps or microchips implanted inside the body of employees.
Some of these methods can be used to track where employees are, what they’re doing at any given moment, what their body temperature is, and what they’re viewing online. Employers can collect data and use it to score workers on their individual productivity or to track data trends across an entire workforce.
These tools aren't being rolled out only in office spaces, but also in work-from-home setups and on the road, to mobile workers such as long-haul truck drivers and Amazon warehouse workers.
Is This Legal?
As you might imagine, the laws of the land have had a hard time keeping up with the quick pace of these new tools. In most countries, there are no laws specifically forbidding employers from, say, video-monitoring their workforce, except in places where employees should have a “reasonable expectation of privacy,” such as bathrooms or locker rooms.
In the US, the 1986 Electronic Communications Privacy Act laid out the rule that employers should not intercept employee communications, but its exceptions—that communications can be intercepted to protect the privacy and rights of the employer or if business duties require it, or if the employee granted prior permission—make the law toothless and easy to get around.
A few states in the US require employers to post notice if they are electronically monitoring people in the office, and there are some protections for the purpose of collective bargaining, such as discussing unionizing.
In February, US Democratic senators led by Bob Casey of Pennsylvania moved to introduce legislation to curtail workplace monitoring by employers. It would require bosses to better notify employees of on- and off-duty surveillance and would establish an office at the US Department of Labor to track work monitoring issues.
What You Can Do
Privacy experts say that unfortunately for many employees, the only recourse for a worker who doesn't like a company's surveillance policies is to find another job.
Short of that, employees can make a formal request for disclosure of a company's data collection and surveillance policies, typically from the human resources department. Such policies may be outlined in an employee handbook, but also may not be readily available, especially for smaller companies and startups. Workers who are part of a workers' union can request the information through their representatives.
A company may not know it is required to post notice that it's surveilling employees, or that it is in a state where two parties must consent to phone-conversation monitoring. You could choose to let your company know it's not in compliance, and if the company doesn't make changes (and you're in the United States), you could alert your state's workforce commission, file a complaint with the US Occupational Safety and Health Administration, or raise HIPAA (Health Insurance Portability and Accountability Act) medical privacy issues.
Apart from all that, general data hygiene is also a good counter. Clear your browser cache regularly, and don't keep private data on work devices or transmit them over work email accounts. Block your workstation's webcam when it's not in use (if you're allowed to do that) and ask your employers if you can opt out of surveillance tools that are not required for your work.
Most importantly, be mindful when your employer issues notices about workplace privacy changes or when new software or hardware is introduced for the purposes of monitoring. Ask questions and research what these tools are if you don't get a good explanation from your bosses.
---
10 Breakthrough Technologies & Their Use Cases in 2023
Today's technology is developing quickly, enabling faster changes and advancements and accelerating the overall rate of progress.
For instance, advancements in machine learning (ML) and natural language processing (NLP) have made artificial intelligence (AI) more common in 2023 as part of digital transformation solutions.
Technology is still one of the main drivers of global development. Technological advancements provide businesses with greater opportunities to increase efficiency and develop new products.
Business leaders can make better plans by keeping an eye on the development of new technologies, foreseeing how businesses might use them, and comprehending the factors that influence innovation and adoption, even though it is still difficult to predict how technology trends will pan out.
Here are the top 10 emerging technology trends to watch for in 2023.
1. AI that creates graphics and assists with payment
This is the year of the AI artist. With just a few language cues, software models created by Google, OpenAI, and others can now produce beautiful artwork.
You may quickly receive an image of almost anything after typing in a brief description of it. Nothing will ever be the same.
A variety of industries, including advertising, architecture, fashion, and entertainment, now employ AI-generated art.
Realistic visuals and animations are made using AI algorithms. Also, new genres of poetry and music are being created using AI-generated art.
Moreover, AI will simplify the purchasing and delivery of products and services for customers.
Nearly every profession and every business function across all sectors will benefit from AI.
The convenience trends of buy-online-pickup-at-curbside (BOPAC), buy-online-pickup-in-store (BOPIS), and buy-online-return-in-store (BORIS) will become the norm as more retailers utilize AI to manage and automate the intricate inventory management operations that take place behind the scenes.
2. Progress in Web3
Also, 2023 is witnessing a huge advancement in blockchain technology as businesses produce more decentralized products and services.
We now store everything on the cloud, for instance, but if we decentralized data storage and encrypted that data using blockchain, our information would not only be secure but also have novel access and analysis methods.
In the coming year, non-fungible tokens (NFTs) will be easier to use and more useful.
For instance, NFT concert tickets may grant you access to behind-the-scenes activities and artifacts.
NFTs might represent the contracts we sign with third parties, or they could be the keys we use to engage with the variety of digital goods and services we purchase.
3. Datafication
The breakthroughs described in the list of technological trends for 2023 will inevitably lead to the datafication of many businesses.
Datafication refers to the process of converting human tasks and activities into data-driven technology.
It is the first important development toward a fully data-driven society. Other branches of the same customer-centric analytical culture include workforce analytics, product behavior analytics, transportation analytics, health analytics, etc.
Due to the vast number of linked Internet of Things (IoT) devices, it is possible to analyze a company's strengths, weaknesses, risks, and opportunities using a greater number of data points.
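As a hedged sketch of what "a greater number of data points" can mean in practice (the device names, metrics, and service target below are invented), readings from many connected devices can be rolled up into per-metric summaries, with underperforming metrics surfacing as candidate weaknesses:

```python
from statistics import mean

# Hypothetical readings from connected IoT devices: (device_id, metric, value)
readings = [
    ("sensor-01", "temperature", 21.5),
    ("sensor-02", "temperature", 22.1),
    ("sensor-01", "uptime_pct", 99.2),
    ("sensor-02", "uptime_pct", 93.4),
]

def summarize(readings):
    """Group readings by metric and compute the mean for each one."""
    by_metric = {}
    for _, metric, value in readings:
        by_metric.setdefault(metric, []).append(value)
    return {metric: mean(values) for metric, values in by_metric.items()}

summary = summarize(readings)
# Metrics falling below an (invented) 99% service target become candidate "weaknesses"
weaknesses = [m for m, v in summary.items() if m == "uptime_pct" and v < 99.0]
```

A real deployment would stream far more data through a proper pipeline, but the pattern is the same: aggregate many small observations, then compare them against targets to expose strengths and weaknesses.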
According to Fittech, with the market for datafying sectors surpassing $11 billion in 2022, datafication is evolving into a profitable business model.
4. Certain aspects of the Metaverse will become real
The term "metaverse" has evolved to refer to a more immersive internet in which we will be able to work, play, and interact with one another on a persistent platform.
According to experts, the metaverse will contribute $5 trillion to the world economy by 2030, and 2023 is the year that determines the metaverse's course for the next ten years.
The fields of augmented reality (AR) and virtual reality (VR) will develop further.
An avatar is the presence we project when we interact with other users in the metaverse. In the coming year, avatar technology will also progress; with motion capture technology, avatars will even be able to mimic our body language and movements.
Further advancements in autonomous AI-enabled avatars that can represent us in the metaverse even when we aren't signed in to the virtual world may also be on the horizon.
To perform training and onboarding, businesses are already utilizing metaverse technologies like AR and VR, and this trend will pick up steam in 2023.
5. Bridging the digital & physical world
The digital and physical worlds are already beginning to converge, and this tendency will continue in 2023. This union consists of two parts: 3D printing and digital twin technologies.
Digital twins are virtual models of actual activities, goods, or processes that may be used to test novel concepts in a secure online setting.
To test under every scenario without incurring the enormous expenses of real-world research, designers and engineers are adopting digital twins to replicate actual objects in virtual environments.
We are witnessing even more digital twins in 2023, in everything from precise healthcare to machinery, autos, and factories. This is a part of the best digital transformation solutions in this new era.
Engineers may make adjustments and alter components after testing them in the virtual environment, before employing 3D printing technology to produce them in the actual world.
6. More human-like robots are coming
Robots will resemble humans even more in 2023, both in terms of look and functionality.
These robots will serve as event greeters, bartenders, concierges, and senior citizens' companions in the real world.
While they collaborate with people in production and logistics, they will also carry out complicated duties in factories and warehouses.
One business, Tesla, is working hard to develop a humanoid robot that will operate in our homes.
Two Optimus humanoid robot prototypes were unveiled by Elon Musk, who also stated that the business will be prepared to accept orders in the next few years.
The robot is capable of carrying out simple duties like watering plants and lifting objects.
7. Digitally Immune Systems
The launch of the Digital Immune System must be included in any list of technological trends for 2023.
This system refers to an architecture made up of techniques taken from the fields of software design, automation, development, operations, and analytics. By eliminating flaws, threats, and system weaknesses, it aims to reduce company risks and improve customer satisfaction.
The significance of DIS resides in automating the many components of a software system to successfully thwart virtual attacks of every description.
According to Gartner, businesses that have already implemented DIS will reduce customer downtime by around 80% by 2025.
So, if you are looking for the best digital transformation services company to introduce digital immune systems, TransformHub is here to guide you.
8. Genomics
Genomic research has improved our grasp of life and contemporary health analytics while also advancing our understanding of brain networks.
In the upcoming years, fast-developing technologies such as scarless genome editing, pathogen intelligence, and NGS data analysis platforms will use AI to interpret hidden genetic codes and patterns, elevating genomic data analysis and metagenomics to the top positions in the biotech sector.
Functional genomics, which uses epigenome editing to reveal the influence of intergenic areas on biological processes, is becoming more prevalent in 2023 technology trends.
9. CRISPR
The gene-editing technology, CRISPR, has quickly moved from the lab to the clinic during the past ten years.
It started with experimental therapies for rare genetic abnormalities, and clinical trials for common conditions, such as high cholesterol, have lately been added; new variants of the technology might advance things much further.
Due to its ease of usage, CRISPR is quickly becoming a common technology employed in many cancer biology investigations.
Moreover, CRISPR is entirely adaptable. It is more accurate than existing DNA-editing techniques and can essentially modify any DNA segment within the 3 billion letters of the human genome.
The simplicity of scaling up CRISPR is an additional benefit.
To control and analyze hundreds or thousands of genes at once, researchers can utilize hundreds of guide RNAs. This kind of experiment is frequently used by cancer researchers to identify genes that might be potential therapeutic targets.
10. Growth of Green Technology
Climate change is a fact. It is a rising issue that disturbs governments and society at large and poses a threat to human health and the environment.
The use of so-called green technology is one method of combating global warming.
Globally, scientists and engineers are working on technical solutions to reduce and get rid of everything that contributes to climate change and global warming.
Here are some incredible uses for the same:
Emissions reduction
Waste-to-Energy
Management of waste and recycling
Biofuels
Treatment of wastewater
Solar power
Tidal and wave power
Green vehicles
Smart structures
Farms and gardens in the air
TransformHub: Keeping Ahead of Technological Trends
These innovations have the power to completely alter the way we live, work, and interact. It's critical to be informed about these changes and take their effects into account.
The pandemic has sped up the necessary industry-wide human-AI collaboration, and it looks like 2023 will be the year we catalyze this cooperation into some truly extraordinary inventions.
For more information on how contemporary automation and AI are fusing all the defining industries of our era into a single data-driven civilization, stay up-to-date with one of the best digital transformation companies in Singapore, TransformHub.
We take complete accountability to digitally transform your business by providing precisely tailored solutions based entirely on your requirements.
Let’s connect and bring your vision to life!
---
pleasure and protest
An essay about Covid-19 and the quarantine by Paul Preciado, published in early May in Artforum, concludes with a remarkably prescient sentiment:
It is imperative to modify the relationship between our bodies and biovigilant machines of biocontrol: They are not only communication devices. We must learn collectively to alter them. We must also learn to de-alienate ourselves. Governments are calling for confinement and telecommuting. We know they are calling for de-collectivization and telecontrol. Let us use the time and strength of confinement to study the tradition of struggle and resistance among racial and sexual minority cultures that have helped us survive until now. Let us turn off our cell phones, let us disconnect from the internet. Let us stage a big blackout against the satellites observing us, and let us consider the coming revolution together.
When I first read it a month ago, it seemed far-fetched to me. It struck me as the kind of tacked-on rallying-cry conclusion that many critical essays end with, sounding a note of hope when their critique otherwise suggests the futility of resistance. But now it seems as though “the time and strength of confinement” has actually turned into a surprisingly broad commitment to “study the tradition of struggle and resistance among racial and sexual minority cultures that have helped us survive until now” for those thousands of people now joining protests whose tone has been set and adopted from Black Lives Matter and other police- and prison-abolition movements. It can appear as though the “coming revolution” has indeed come, and de-alienation is taking place night after night in the streets.
But that development doesn’t seem to have followed from Preciado’s plea that we “turn off our cell phones” and “disconnect from the internet.” The uprising is not currently shaping up as a unified resistance to technology; rather it has manifested as a collective rejection of racist policing and all the societal manifestations of structural racism more broadly. That’s not to say that contemporary technology is not deeply implicated in sustaining and extending racism. The webs of surveillance it facilitates make possible not only the old forms of discrimination and targeted oppression but new forms of embedded, infrastructural racism, whether that is a matter of the racist search results Safiya Umoja Noble details in Algorithms of Oppression, the systematic misidentifications of facial recognition technology that Joy Buolamwini has detailed, or the ways race is encoded and reified and leveraged, as Ruha Benjamin outlines in Race After Technology. Day after day, Chris Gilliard’s Twitter feed documents the tech industry’s complicity in structural discrimination and racist policing. Especially egregious are “neighborhood watch” platforms like Nextdoor, which are vectors for racist intimidation, and surveillance systems like Amazon’s Ring, which have proliferated through the company’s partnerships with police departments.
So Preciado’s implied sequence of events seems backward: Our relationship to “biovigilant machines of biocontrol” — a.k.a. phones — begins to change when our relationship to resistance and liberation struggles changes first. (And then changes in relationships to technology feed into protest tactics and strategy, and so on.)
For now, tech companies seem like they are on the defensive: For instance, IBM, Microsoft, and Amazon have been pushed (thanks in part to the researchers cited here) to abandon their development of facial recognition technology or temporarily halt its sale to police departments. Some workers at companies like Facebook have questioned their roles in fomenting fascism and racism. Yet it is also easy to imagine that tech companies will try to capitalize on any progress toward police abolition by proposing as alternatives its surveillance-driven forms of predictive policing and pre-emptive discrimination (like “cashless stores” which effectively prescreen customers, and other tech-driven forms of “targeting” that allow businesses to shop for customers). All the many forms of algorithmic screening will likely be touted as useful planks in efforts to “defund the police” by automating the police’s current function of enforcing modes of segregation and unevenly distributed economic exploitation. In Cloud Ethics, Louise Amoore details how companies have tried to sell AI tools to police departments that would, for instance, anticipate protests or identify targets for ICE by scanning social media and other forms of location data and network activity. These tools are marketed as police aids but they could be repositioned as automating the police away. Of course, this would not solve the problems presented by policing, but encode them in systems that would be just as impervious to change, abetted by the false sense of computational neutrality.
It will likely require sustained protest and pressure to prevent tech companies from putting forward their usual methods (datafication, surveillance, solutionism, regulatory capture) that their business models demand. “Decollectivization and telecontrol” will certainly be attempted to contain the protests, even if they did not necessarily spark them.
In part, Preciado’s essay focuses on ideas of immunization as protection, as exemption from risks others are made to bear, and how these kinds of exclusions become the basis for communities. “The management of epidemics stages an idea of community, reveals a society’s immunitary fantasies, and exposes sovereignty’s dreams of omnipotence—and its impotence,” he writes. (This makes me think now of the “qualified immunity” that U.S. police are granted to protect them from legal accountability for their actions, as well as how Nextdoor permits neighborhoods to defend their whiteness.)
Epidemics are “sociopolitical constructions rather than strictly biological phenomena.” They don’t unfold according to some script dictated by a virus’s level of contagiousness; they enter into existing social relations and present an occasion for their rearticulation. Thus, Preciado argues, “the virus actually reproduces, materializes, widens, and intensifies (from the individual body to the population as a whole) the dominant forms of biopolitical and necropolitical management that were already operating over sexual, racial, or migrant minorities before the state of exception.” With Covid-19, this is evident in how white people have been disproportionately less affected, an index of their relative privilege. The refusal among white people to wear masks reflects and celebrates this privilege as well, which helps explain why health officials who recommend masking have been harassed and threatened by white mobs.
Similarly, “cures” for diseases don’t proceed inevitably to those who need them; they aren’t distributed any more evenly than power, wealth, or opportunity. They too must first reannounce the existing power relations, which delineate who deserves to become “well” or immune and who should be lastingly pathologized. (If a cure threatened existing power relations, those in power would seek to suppress it.)
For Preciado, the social course of pandemics and “cures” reflect the more general logic of “pharmacopornographic” forms of control — “microprosthetic and media-cybernetic control” administered through communication technology and pharmaceuticals, visual and literal stimulants. As Foucault argued about power generally, these mechanisms of control are experienced not as restrictive but as subjectivity-granting, an expansion of pleasurable possibilities that secure the subjects’ assent. Preciado writes: “These management techniques function no longer through the repression and prohibition of sexuality, but through the incitement of consumption and the constant production of a regulated and quantifiable pleasure. The more we consume and the better our health, the better we are controlled.”
I’m often tempted by this line of analysis to treat all forms of pleasure with suspicion — anything proposed as “fun” is probably a thinly disguised form of social control, enjoyment of which establishes just how much my psyche has already been formatted by the apparatus of domination. It then follows that anything that makes me uncomfortable proves I’m engaging in a form of resistance. But that unsustainable line of thinking leads nowhere. The point is not to demonize pleasure but to explicitly politicize it, to engage in political practices that sustain a different kind of subjectivity that enjoys other kinds of joy. In this conversation with Zoé Samudzi, Vicky Osterweil explains:
One of the things that scares police and politicians the most when they enter a riot zone — and there are quotes from across the 20th century of police and politicians saying this — is that it was happy: Everyone was happy ... The playwright Charles Fuller, who happened to be a young man starting out his career during the Philadelphia riots of 1964 ... talks about the incredible sense of safety and joy and carnival that happens in the streets.
I think riots and militant violent action in general get slandered as being macho and bro-y, and lots of our male comrades like to project that sort of image. That definitely happens, but I actually think riots are incredibly femme. Riots are really emotive, an emotional way of expressing yourself. It is about pleasure and social reproduction. You care for one another by getting rid of the thing that makes that impossible, which is the police and property. You attack the thing that makes caring impossible in order to have things for free, to share pleasure on the street. Obviously, riots are not the revolution in and of themselves. But they gesture toward the world to come, where the streets are spaces where we are free to be happy, and be with each other, and care for each other.
This is the obverse of the pleasure in consumption and individuation that Preciado describes, which in his analysis is anchored in the technologies that allow us to consume in physical isolation at home like would-be Hugh Hefners in our multimedia-enabled “soft prisons,” adrift in a fantasy of dematerialized insubstantiation.
The subjects of the neoliberal technical-patriarchal societies that Covid-19 is in the midst of creating do not have skin; they are untouchable; they do not have hands. They do not exchange physical goods, nor do they pay with money. They are digital consumers equipped with credit cards. They do not have lips or tongues. They do not speak directly; they leave a voice mail. They do not gather together and they do not collectivize. They are radically un-dividual. They do not have faces; they have masks.
There seems to be a lot of fetishization of “real” communication implied here — again as if digital communication were the main obstacle preventing people from collectivizing their bodies for revolution. But the protests now seem to suggest that while consumerism may have been an obstacle (i.e. the right-wing talking point that the protests are popular because people can’t go shopping), digital technology, which many have been leaning on and living through more than ever under lockdown conditions, hasn’t been, at least not yet, and not in the ways Preciado is suggesting.
The threat posed by technology is not so much that it prevents people from having “real” encounters but that it can facilitate such encounters on terms that are already fully contained — imagine, for example, protests operating only within parameters deemed acceptable in advance by machine-learning simulations, or conversations that are pre-mediated to a degree that they can’t exceed the anticipated possibilities. Preciado is right that these experiences will be pleasurable; people generally take pleasure in being accommodated, from being recognized. But to detect the kinds of pleasure that are complicit with oppressive forms of social control, it is not enough to simply look for situations where screens are foregrounded and bodies are suppressed. It’s not enough to check our voice mail.
---
Audience Studies (3P18) Blog #1
Week 2 – Workplace Efficiency Seminar
About a month ago I was able to participate as an audience member in an online seminar run by my work. The seminar's purpose was to improve efficiency in the workplace, as the department internationally was falling short of production expectations. The seminar was conducted by a published author and professional motivator, along with a few guests of his own. Due to restrictions, the seminar was presented online, allowing the audience to participate from home. Given that the need for improvement was department-wide and international, the audience was large and mediated. Because of its size, measures such as muted microphones or passing of the mic had to be taken to ensure that there were no interruptions or issues with the flow of the presentation. I found the format of having leaders in conversation to be beneficial in this setting, as things like "raising your hand" on Microsoft Teams can disrupt the flow of thoughts or of the presentation. Even though these rules are small, they were effective in mediating such a large audience. As discussed in lecture, retention of information has drastically changed throughout history. Since this seminar was meant as a source to draw from to improve our own skills, we were required to take notes in order to retain the information. For example, we were specifically asked to highlight notes about team building, something the department has lacked a focus on due to the individuality of the work. Through taking notes and highlighting the important concepts, the audience will be able to remember the importance of building on this aspect of the job, as well as ways to do so.
Looking back to ideas presented in the textbook, this audience experience can be compared to others like it in the past. Unlike early oral audience experiences, this event was co-located in time but not in space. One reason is that it would not have been cost-efficient to fly out international members of the department to attend a seminar together. The online format allowed the audience to attend the event from the comfort of their own homes, making scheduling easier for both the company and the employees. The main reason an in-person event could not occur, however, was the current pandemic. With the health and safety of employees prioritized, the only possible way of holding the event was online. This audience experience could have differed if the virus had not been an obstacle, as a physical gathering could have drastically changed the experience for better or worse depending on the person. For example, some audience members would have felt significantly more engaged through physical interaction between the speaker and themselves as opposed to through a screen. Co-location in both space and time could have led to better results from employees, as having both factors could have left a stronger memory of the event and its ideas. Another concept we can refer to is the evolution of communication tools in our society. For this audience experience, the evolution towards real-time video calls is what allowed the event to succeed. Through this communication tool, the audience could see the speaker’s face, as well as any demonstrations given through actions. Another aspect of this tool is real-time screen sharing, which allowed the audience to follow along with the aid of a PowerPoint presentation.
The anonymity of the polls helped avoid any passiveness or hesitance to participate among the audience. Seeing the results of these polls on screen allowed audience members to view the data together, getting a sense of what other employees are experiencing while having it analyzed by a professional. Using Sullivan’s three conceptions of the audience, outcome can be seen in this experience through what happens as a result of hosting the seminar for employees. Mass is present in the fact that only a few key speakers have their cameras on, giving the audience no knowledge of the other members within it. Agent is seen in how the audience members use the information or “texts” being given while it is being presented, as well as how they choose to interpret and apply it afterwards. The power the speaker had over this audience will be shown through how efficiency in the department changes moving forward. In the Livingstone article “Audiences in an Age of Datafication: Critical Questions for Media Research”, one of the main focuses is the datafication of audiences. The concepts presented in this section on using audience data to uncover hidden patterns can be applied to my own experience. Through the use of anonymous polls, the speaker was able to gather live data on what the employees truly thought. As mentioned in the article, this strategy moves past simple forms of data such as the number or content of publicly displayed comments. This data was beneficial not only for the speaker and audience to analyze, but for the company as well. The company can use this data to look for ways to improve on its own end, working towards the general goal together with the employees.
Text
COMM2126 Digital Media and the Senses Task 5 [me19ag]
For this week’s task I am disrupting the data tracking on my wearable device and analysing its impact on me and the services I receive. I wear my Apple Watch for self-knowledge, but it is interesting to think that, as Crawford (2015) points out, in this age of networked connectivity the system that is supposed to serve me becomes a tool for many other intermediaries. The data I get back is a personalised report, but the system around it is built for mass collection and analysis.
The Apple Watch provides activity tracking consisting of three rings: Move, Exercise, and Stand.

The first possible alteration is obvious, as users can set their own goals. If I changed them to very small values I could easily reach a goal of exercising for, say, one minute a day instead of my current goal of 60 minutes a day. However, I wanted to see how I could trick my Apple Watch into thinking I had reached my real daily goals.
To alter the status of my rings, I used the Health app on my iPhone. This app offers great health insights via your Apple Watch and connected third-party apps and it also allows you to enter health and fitness data manually. To make the data stick, you have to enter it as a workout.

In the workouts section you can easily Add Data, where it asks you for details like the activity type, the number of kilocalories burned and the duration of your activity. The kilocalories you enter are your Active Calories and will be directly added to your Move ring once your Workout is saved. The Move ring is easily tricked by adding data manually.
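The arithmetic behind this trick can be sketched in a few lines (a hypothetical model of what the post describes, not Apple's actual HealthKit logic; the function name and numbers are made up for illustration):

```python
# Hypothetical sketch of how manually entered workouts feed the Move ring.
# Not Apple's implementation: just the tally the post describes, where manual
# kilocalories count exactly like sensor-measured ones.

def move_ring_progress(active_kcal_entries, move_goal_kcal):
    """Return Move-ring completion as a fraction, capped at 1.0 (a closed ring)."""
    return min(sum(active_kcal_entries) / move_goal_kcal, 1.0)

# Two sensor-measured bouts of activity plus one fake 300 kcal "workout"
# entered through the Health app:
entries = [120, 95, 300]
print(f"Move ring: {move_ring_progress(entries, move_goal_kcal=500):.0%} closed")
```

Because the fake entry is indistinguishable from real activity in this tally, the 300 kcal "workout" closes a 500 kcal ring that honest activity alone would not have.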
Data manipulation on an Apple Watch does, however, have its limitations: you can’t control your step count, nor will it be affected by exercise data you enter manually. You also can’t directly set your standing hours the way you can exercise minutes or calorie burn, and stand hours will not be credited for fake workouts. You can also go back and delete the falsified data by accessing your data log.
Other ways of cheating your Apple Watch that I have read about include waving your wrist (like crazy), similar to the Unfit Bits project. Your watch will assume you’re moving and will tack on points to your step count, Move goal, Stand goal, and even Exercise minutes if you do it long enough. Or you can just hold your arm up, which supposedly earns you hours towards your Stand goal. Another way is to pretend you are someone else and maximise your activity: you can do this by altering your personal data (age, sex, height, and weight).
The impact of this falsified data is obvious in the averages section, which provides charts of my activity over the past 7 days, month, or year. So you can very easily make your wearable device think you are a more active, healthier person than you actually are.
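The skew this creates in the averages can be illustrated with a toy 7-day calculation (all numbers invented for illustration):

```python
# How one falsified entry can skew a 7-day Move average (invented numbers).
real_days = [310, 280, 295, 330, 300, 290]  # kcal genuinely burned on six days
seventh_day_real = 300                      # what was actually burned on day 7
fake_workout = 600                          # manually logged "workout" in kcal

honest_avg = (sum(real_days) + seventh_day_real) / 7
reported_avg = (sum(real_days) + seventh_day_real + fake_workout) / 7
print(round(honest_avg), "->", round(reported_avg))  # prints: 301 -> 386
```

A single fake 600 kcal entry lifts the weekly average by nearly a third, which is exactly the distortion the charts then present as a trend.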
My hacking changes how the service represents and communicates with me: during a normal day I would get regular reminders to stand up and move for a minute, not to mention the little notifications saying “You can still make it!”, encouraging me to work out a little before the day is over. They stop if my tracking device thinks I have done enough for the day.
References:
Crawford, L. 2015. Our metrics, ourselves: A hundred years of self-tracking from the weight scale to the wrist wearable device. European Journal of Cultural Studies. 18(4-5), pp. 479–496.
Fritsch, K. 2018. Towards an Emancipatory Understanding of Widespread Datafication.
Keller, J. 2015. Apple fitness chief Jay Blahnik talks up the Apple Watch's fitness advantages. [Online]. Available from: https://www.imore.com/apple-fitness-chief-jay-blahnik-talks-apple-watchs-fitness-advantages
Peterson, J. 2018. How to cheat your Apple Watch Rings. [Online]. Available from: https://ios.gadgethacks.com/how-to/cheat-your-apple-watch-rings-0191261/
Rosenfield, S. 2015. Apple’s Fitness Guru Opens Up About the Watch. [Online]. Available from: https://www.outsideonline.com/2006026/apples-fitness-guru-opens-about-watch
Text
Deciphering Datafication in Data Mining: Revealing the Fundamental Ideas

In today’s data-centric landscape, the colossal and diverse data being generated poses a formidable challenge for extracting meaningful insights. The interplay between datafication and data mining emerges as a crucial dynamic for unlocking the knowledge hidden within this vast sea of data. This article delves into datafication, elucidating its significance in data mining and how it serves as a catalyst for gleaning valuable insights.
Understanding Datafication:
Datafication is the transformative process that digitizes real-world facets, from human behaviors to industrial operations, converting them into digital data. This conversion of analog or physical information into a digital format enables efficient storage, processing, and analysis by computer systems. This digital transformation spans various data types, encompassing text, images, audio, video, sensor readings, and more. Wearable devices, for instance, turn vital signs into digital data, offering insights into health trends and personalized healthcare.
Datafication: The Catalyst for Data Mining
Data mining, the process of extracting insights from large datasets, necessitates structured data for effective analysis. Datafication lays the foundation for successful data mining by transforming raw, diverse data into a structured or semi-structured format. This process involves data cleaning, integration, transformation, and reduction, ensuring the data is consistent and suitable for mining. The standardized digital format facilitates the application of various data mining techniques, unveiling valuable insights, patterns, and trends.
Key Steps in Datafication:
To comprehend the intricacies of datafication, several key steps are involved:
Data Collection: Gathering data from diverse sources, including databases, social media, sensors, and more.
Data Cleaning and Preprocessing: Detecting and correcting inconsistencies and errors to enhance data quality.
Data Integration: Merging disparate datasets into a unified format for easier analysis.
Data Transformation: Converting integrated data into a suitable format, including normalization and aggregation.
Data Reduction: Techniques to reduce data volume while preserving integrity, crucial for handling large datasets.
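Assuming a toy sales dataset, the steps above can be sketched as a small pipeline (illustrative function names and data, not a production system):

```python
# Minimal sketch of the datafication steps: collection -> cleaning ->
# integration -> transformation -> reduction. All names and data are invented.

def clean(records):
    """Drop records with missing values and normalise obvious inconsistencies."""
    return [
        {**r, "city": r["city"].strip().title()}
        for r in records
        if r.get("city") and r.get("sales") is not None
    ]

def integrate(*sources):
    """Merge disparate datasets into one unified list."""
    merged = []
    for src in sources:
        merged.extend(src)
    return merged

def transform(records, max_sales):
    """Scale the numeric field to the 0-1 range (min-max style normalisation)."""
    return [{**r, "sales": r["sales"] / max_sales} for r in records]

def reduce_volume(records, keep_fields=("city", "sales")):
    """Keep only the fields needed for mining, shrinking data volume."""
    return [{k: r[k] for k in keep_fields} for r in records]

# Collection: two raw sources with inconsistent formatting and a bad record.
crm = [{"city": " paris ", "sales": 120, "rep": "A"}]
web = [{"city": "LYON", "sales": 80, "rep": "B"}, {"city": "", "sales": None}]

data = reduce_volume(transform(clean(integrate(crm, web)), max_sales=120))
print(data)  # cleaned, unified, normalised, reduced records ready for mining
```

Each stage maps to one step in the list: the bad record is dropped during cleaning, the two sources are merged during integration, sales figures are normalised during transformation, and the unused `rep` field is discarded during reduction.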
The Significance of Datafication in Data Mining:
Datafication holds paramount importance in the success of data mining, offering several key benefits:
Improved Data Quality: Rigorous data cleaning during datafication results in enhanced data quality, fundamental for accurate analysis.
Standardization and Consistency: Datafication transforms data into a standardized format, ensuring consistency and compatibility across various sources.
Enhanced Analysis: Prepared data enables effective application of diverse data mining techniques, revealing hidden patterns and trends.
Time and Cost Efficiency: Organized data from datafication streamlines the mining process, saving time and resources.
Facilitating Predictive Modeling: Structured data from datafication is crucial for building predictive models, aiding in forecasting and strategic decision-making.
Datafication serves as a pivotal bridge in the data mining journey, transforming raw, unstructured data into a format amenable to analysis. By emphasizing data quality, standardization, and enhanced analysis, datafication significantly contributes to the efficiency and success of data mining endeavors. As we navigate this data-driven era, mastering the art of datafication becomes imperative for extracting meaningful value from the wealth of available data, fostering knowledge and innovation.
Text
Literature Reflection (Bridle & AI bias)
I found that the podcasts by James Bridle, in combination with the literature review on gender, race and power in AI, give more in-depth information on topics whose surface I have touched before, both theoretically and in person. Both works illustrate why it is important that we mindfully use and design technology. Something that stuck with me is how much less visible important institutions have become; my local bank has closed and now operates almost fully digitally, and I can handle many municipal matters online. What does this mean for society's grip on and understanding of them? Visibility and transparency are ground principles for and of our liberal and democratic system, so why not here? Visibility = responsibility: this ranges from the power relations visible in the internet cables that run under the oceans to tech companies making diversity reports publicly available.
I never realized how John Berger's theories on seeing art can be applied to modern-day technology. I find the radio analogy especially interesting; the same can be said for social media nowadays, where only a small percentage of users produces content that is viewed by millions. It is often a one-way conversation which leaves its participants feeling isolated instead of connected. This has become even more apparent during covid-19, where online Friday drinks have not felt the same as in real life. Also, the power of tech companies has increased even more now that more and more people depend on them. I have a feeling that the increase in digital living will have a huge impact on people's mental health. On the other hand, the digital realm has democratized information and the discussion of it, as a variety of free webinars, festivals and conferences is available online, from the comfort of people's homes. This will in the end also democratize new tools and how we perceive the world around us. The way James Bridle described our relation to technology was in line with Donna Haraway's idea about living in the terrestrial. If we saw and cared for technology the way we do certain animals, we would be able to re-evaluate what we can get from it. Bridle mentions that artificial intelligence can help us escape the Anthropocene and reconnect ourselves to nature, though he does not mention how. However, I thought of how our living world is progressively supplied with sensors, and how with the resulting data we can gain insight into the complexity of the interdependencies between living organisms. For instance, sensors and the datafication of forests have laid bare the complex web of communication between trees. While researching I came across this TED Talk: https://www.youtube.com/watch?v=pvBlSFVmoaw.
This mention of changing the way we connect to our technologies reminded me of the term automation bias; the urge of humans to favour suggestions from automated systems and to ignore contradictory information made without automation, even if it is correct. Especially in covid times, people have this idea of a 'technofix', which is based on a combination of trust in technology and limited trust in the ability, and the willingness, of humans to adapt their behaviour. We are looking for the fastest solution which will cause us to make the least amount of sacrifices; technology will fix our problem and we do not have to think about it any longer. A “quick fix” for the corona crisis, in the form of a vaccine, would quickly silence the debate on the structural causes of the pandemic and allow us to revert to our pre-corona practices in a heartbeat. Comparable to the way medication often takes away the necessity of aspiring to a healthier lifestyle. Because of this apparent lack of any human sacrifice, the idea of the techno-fix goes hand in hand with a feeling of guilt, as if, like in the myth of Prometheus, we really don’t deserve to use technology.
The crisis is slowly taking away our illusion of the tech fix. The essence of these (false) solutions is the illusion they create that we can “save” the climate without having to change our lifestyle. The underlying belief is that we’re not willing to make a sacrifice such as travelling less, for example, or reducing our total energy use. In fact, the main notion seems to be that human beings are barely able to adjust their behavior at all without the clear prospect of a reward. It would be interesting to make the climate crisis felt as immediately as the current pandemic. This circles back to the notion that visibility calls for understanding, and thus responsibility. As discussed in the Bridle podcasts, technological agency and climate change are both visual problems, or rather problems of a lack of visibility. An artwork that succeeds in visually raising awareness of this is terra0, a forest that can autonomously sell its trees and eventually, using the accumulated capital, buy itself and become a self-owned economic unit. For now, it remains an artistic experiment designed to raise awareness, but in theory you could build such a program on the blockchain to make a forest represent itself.
For me, as a woman enrolled in a technologically-focused minor in a class in which the majority of the people identify as male, the text on gender, race and power in AI was really interesting and contained some familiar frustrations. By connecting the unequal representation of women in the tech industry to biased systems in AI, the author presents two versions of the same problem. I find data violence, which enacts forms of administrative power that affect some of us more than others, a relevant modern-day problem. In a world in which data and facts reign and where systems are trained on existing datasets, representation is of utmost importance. The author stresses that, because AI systems play an important role in our political institutions (like healthcare), we need to re-assess the relationship between the workplace diversity crisis and the problems of bias and discrimination in AI. In a future, ideal world, a supervising board would examine the politics of the design of such a system. It would check how a system was constructed and whose interests shaped the metrics for success or failure.
Understanding 'bias' in data requires accounting for the social context through which the data was produced: how humans make data in context. It is also interesting to note that companies use data violence to shape diversity reports to their wishes. Counting only 80% of the full-time workforce is data manipulation with major implications and should in my eyes be considered a crime, or at least be punished. Again, transparency is the only way for people to know what is going on inside a company; it enables them to hold companies accountable and to make knowledgeable (consumer) decisions. To say that women are inherently less confident in their computing skills is to totally ignore the male-dominated and therefore male-designed social institutions in which many obstacles have to be overcome. This week, I found a woman of colour on YouTube talking about her career in coding who recommended many resources while discussing it in a transparent and non-elitist way. This made me much more interested in it, and most importantly made me feel as if I could also find my place in male-dominated sectors. Talking to two girls who participated in a summer residency of V2_Lab for Unstable Media and seeing their work also made me feel more comfortable in that area. Seeing yourself represented certainly boosts your confidence in your own abilities. As stated in the article, "the inclusion of women becomes the solution for all gender problems, not just those of exclusion or absence. … their mere presence builds the table they sit at in the first place." The ultimate goal is cognitive diversity, and cognitive diversity is correlated with identity diversity. That means it's not just about women in tech. It is about broad voices, broad representation.
I have been thinking about my internship lately, which was unpaid and in a male-led studio. I worked really hard and participated in many interesting projects, but being given the feeling that I should already feel rewarded and appreciated by this mere participation felt empty in the end. I have been thinking about students who might not have been able to do the internship because they could not pay their rent that way, and how this influences the diversity within a studio. I believe that if you appreciate an intern, care about the quality of work and want to give everyone an equal chance to grow as a designer, you pay them. This would in the end contribute greatly to cognitive diversity in the field of design, which has also been male-dominated in the recent past. Biological determinism, as mentioned by the authors, is also interesting in these times in which the political landscape is under pressure. There is more unrest and focus on the pandemic, both reasons for governments to 'silently' change important laws within a country. An example of this is the current situation in Poland, where abortion rights have been almost entirely taken away from women. Former Polish Prime Minister Donald Tusk also criticised the judgement: "Throwing the topic of abortion and a ruling by a pseudo-court into the middle of a raging pandemic is more than cynical". The coronavirus crisis will be global and long-lasting, economic as well as medical. However, it also offers an opportunity. This could be the first outbreak where gender and sex differences are recorded and taken into account by researchers and policy makers. For too long, politicians have assumed that child care and elderly care can be "soaked up" by private citizens, mostly women, effectively providing a huge subsidy to the paid economy.
This pandemic should remind us of the true scale of that distortion and how balancing unpaid work out between all genders can lead to more diversity in fields such as tech and design as well.
Text
Job offers: Artificial Intelligence : Advanced Topics and Social Issues, Marrakesh - Morocco 2020
New post at https://is.gd/9eEs15
Artificial Intelligence : Advanced Topics and Social Issues Marrakesh - Morocco 2020
Artificial Intelligence : Advanced Topics and Social Issues 13-16 April 2020 , Marrakech
About Artificial Intelligence : Advanced Topics and Social Issues
Artificial intelligence is now a major research theme. Data manipulation has opened previously unexplored fields of computer applications. Whether in the economy, industry, science, education, medicine, urban management, space exploration, cultural production, management, social relations… all areas of thought, creation and human action are now impacted by artificial intelligence.
A reflection on AI that did not incorporate the context of its development would be incomplete. The rapid evolution of AI is driven by the efforts of the big multinationals that dominate the web and, through this medium, the entire global economy (GAFAMA: Google, Amazon, Facebook, Apple, Microsoft, Alibaba), along with the thousands of startups that gravitate around them. It is also influenced by the convergence of several technologies and relies on imposing infrastructures: the Internet, telecommunication networks, mobile telephony, data centers, satellites, networks and urban information systems, etc.
Moreover, the concept of disruption, which the datafication of the world and its algorithmization enable, triggers the emergence of new economic models and leads to redefining the methods of regulating competition between companies.
Digital usage, boosted by the power of artificial intelligence algorithms, opens up new perspectives for teaching that are increasingly adapted to the specific needs and profiles of learners as well as to different learning situations. It provides medicine with advanced technologies that can compete with, and often exceed, certain human capacities.
Organizers Artificial Intelligence : Advanced Topics and Social Issues
The laboratories (MISI), (IR2M) and (IIMSC) of Hassan I University in Settat, Morocco, in partnership with the Institute of Iconomy in Paris, France, are organizing an international conference under the theme:
Artificial Intelligence: Advanced Themes and Societal Issues
From 13 to 16 April 2020 in Marrakech, Morocco
Strategy Artificial Intelligence : Advanced Topics and Social Issues
This conference is part of a strategy of collaborative research actions around the themes of artificial intelligence, Big Data, distributed algorithmic, Blockchain, game theory and their applications, and especially mathematical and computer science theories and foundations that underlie them.
This event therefore creates opportunities for meetings, exchanges, debates and networking at a single location for researchers and industry practitioners from different fields:
Researchers in mathematics and computer science who are developing advanced theoretical schemes and who identify and analyze the trends that will form the basis of, and shape, the evolution of AI in the future
Manufacturers who are aware of the most advanced applications of technologies in different sectors: automotive, aeronautics, biotechnology, finance, trade, logistics, construction, training, …
AI solution experts and startup creators who are always passionate about the development of increasingly efficient algorithms, driven by the need to innovate in order to find applications that are always original
Humanities researchers who have scientific approaches and methodologies to analyze in depth the dynamics that underlie technological innovations and their impact on society and to shed light on the societal challenges they represent in terms of risk and opportunities.
Themes Artificial Intelligence : Advanced Topics and Social Issues
This conference will be an opportunity for university researchers, industrialists, public, private and associative actors to reflect and debate on four themes:
AXIS 1
Will answer the question: « What are the theoretical, mathematical and computer foundations that will decide the evolution of AI in the future? »
What new scientific theories and approaches are likely to impact the evolution of artificial intelligence research?
How to articulate the different mathematical theories to improve machine learning and artificial intelligence processes?
What is the state of the art of research and the main trends in automatic language processing and image analysis?
To what extent could we develop a unified theory for the processing of semantics in different formats?
AXIS 2
Will answer the question: « How to design and implement solutions based on artificial intelligence to respond to industrial issues? ». It will be an opportunity to answer the following sub-questions:
What are the main uses of artificial intelligence in industry?
How to design innovative technological solutions for SMEs using artificial intelligence?
How to build interfaces of exchange and collaboration between scientific research and industries to promote the industrial appropriation of artificial intelligence?
What have been the real successes in applying artificial intelligence to the design of Industry 4.0 solutions?
How will Game Theory, Blockchain and Artificial Intelligence Contribute to Trading Transformation and International Trade?
AXIS 3
Will answer the question: « How to awaken competitive and creative innovation through artificial intelligence? ». It will be an opportunity to answer the following sub-questions:
How can artificial intelligence promote innovation, creativity, entrepreneurship and enterprise development?
What is the University’s role in artificial intelligence innovation, and how to innovate in teaching and training approaches?
What support model for young talent should be adopted to foster innovation?
How to translate the concepts developed in university research laboratories into innovative solutions applicable to business projects?
AXIS 4
Will answer the question: « How to identify and avoid ethical problems related to the use of artificial intelligence? ». It will be an opportunity to answer the following questions:
What are the main ethical and legal issues posed by the use of artificial intelligence?
After a succession of ethical problems related to the use of artificial intelligence algorithms, will we still be able to trust this technology?
How to deal with the fragmentation of responsibility in innovations that use machine learning algorithms?
What approaches to adopt towards the general public to dispel the confusion between fantasies of science fiction and the scientific reality of artificial intelligence?
How to identify and organize a debate around the ethical issues of artificial intelligence in health?
How to define reliability criteria in machine learning models?
Which societal models in a world impacted by artificial intelligence?
This event is organized in honor of Professor Gilbert Touzot by the laboratories of Hassan I University:
The laboratories of the FST of Settat
Mathematics, Computer Science and Engineering Sciences (MISI)
IT, Networks, Mobility and Modeling (IR2M)
Computing, Imaging and Complex Systems Modeling (IIMSC)
Systems Analysis and Modeling and Decision Support (AMSAD)
In partnership with the Institute of Iconomy in Paris.
Other academic institutions will be involved in the organization of this international conference. Especially:
Mohammed VI Polytechnic University (UM6P) in Ben Guérir and, through it, the OCP Group.
Website Artificial Intelligence : Advanced Topics and Social Issues
https://aiatsi.com/
Formation Continue FST de Settat - Maroc
#artificial intelligence article#artificial intelligence benefits#artificial intelligence danger#artificial intelligence debate#artificial intelligence english#artificial intelligence examples#artificial intelligence movie#artificial intelligence pdf#Actualités#Annonces#Big Data#UX Design
Photo

The LATEST Rendering Unconscious Podcast is live! I speak with Jacob Johanssen about psychoanalysis, technology, digital media, social media, reality tv, audience, object relations, the body & the implications in psychoanalytic theory & society, culture, etc. Enjoy! http://www.renderingunconscious.org/psychoanalysis/jacob-johanssen-senior-lecturer/ Rendering Unconscious Podcast is hosted by psychoanalyst Dr. Vanessa Sinclair, who interviews psychoanalysts, psychologists, scholars, creative arts therapists, writers, poets, philosophers, artists & other intellectuals about their process, work, world events, the current state of mental health care, politics, culture, the arts & more. Links to all sites where Rendering Unconscious Podcast posts can be found at www.renderingunconscious.org/about Jacob Johanssen's research is influenced by media and communication studies, psychoanalysis, psychosocial studies and critical theory. His work revolves mainly around two themes: exploring Freudian psychoanalysis as a theory and method for digital media research with a particular focus on conceptualisations of affect, as well as using psychoanalysis to think critically about contemporary digital culture more broadly. He is Course Leader for the MA Data, Culture and Society, a new interdisciplinary course on datafication and big data, which launches in 2019. Dr. Johanssen's research interests include audience research, social media, digital labour, psychoanalysis and the media, affect theory, psychosocial studies, critical theory, as well as digital culture. His work has appeared in triple C; the International Journal of Cultural Studies; Information, Communication & Society; Journalism Studies and other journals. He is the author of the monograph Psychoanalysis and Digital Culture: Audiences, Social Media, and Big Data (Routledge, 2019). He is a Founding Scholar of the British Psychoanalytic Council. 
He is also a member of the Executive Committee of the Association for Psychosocial Studies (APS) and serves as its Membership Secretary. Dr. Johanssen is convenor of the Psychoanalysis at Westminster reading group. @jacob_johanssen https://www.instagram.com/p/BvAFvzbn_Ld/?utm_source=ig_tumblr_share&igshid=appeft1d6ww9
Text
When self-monitoring becomes uncomfortably intimate …
The fitness tracker craze has taken a paternalistic turn, with a US university requiring students to wear wristbands. Has datafication gone too far?

Apple's iPhone sales disappoint but profit beats targets, said the headline. It turned out that Apple sold only 74.77m iPhones in the fiscal first quarter of 2016, less than a 1% increase on the same period a year earlier. So what happens? The share price plummets and Alphabet (aka Google) overtakes Apple as the world's most valuable company.
And right on cue, we get the usual kind of kindergarten analysis from the tech commentariat. Apple has run out of ideas. It needs a new breakthrough product along the lines of the iPhone. The iPad was supposed to be that product, but its sales are slumping. And the Apple Watch clearly isn't going to take its place, etc, etc …
The only one of those hypotheses with which I concur is the last. I bought an Apple Watch a while back, on the principle that if you write about this stuff you should put your money where your keyboard is. Six months on, I find myself deeply underwhelmed by it. Sure, it tells the time, but then so does a £25 Timex.
It pairs seamlessly with my iPhone, but you would expect that from a company famously good at tying together the products in its ecosystem. The watch vibrates when an email or a text comes in, which seemed useful at first (I could discreetly see who was emailing or texting while in meetings). But the attraction quickly wore off: when you get as much inbound stuff as I do, the attractions of having a miniature wasp on your wrist soon pale.
When I confided my disappointment in the watch to some other users, however, they reacted badly, or at least sceptically. Could I not appreciate the health and fitness affordances of the device? It turned out that these folks (all male, by the way) were impressed by the fact that the watch enabled them to monitor their heart rates, fitness levels, activity patterns and so on. It even reminded them every hour that they should get up from their screens and move about.
And then it dawned on me that there are two kinds of people in the world: those who are obsessed with the datafication of their bodies and those who are not. I belong to the latter category: the only thing that interests me about my heart is that it is still beating. And when it isn't I shall be past caring. But if the current cult for wearable devices such as fitness trackers is anything to go by, I may soon find myself a member of a despised minority, rather like cigarette smokers, whisky drinkers and partisans of David Icke.
Illustration by Matt Murphy.
The fitness-tracker obsession started out as a weird hobby of early adopters, but it is beginning to acquire a harder edge. I'm told some firms are beginning to incentivise (ie coerce) employees, even senior executives, to wear Fitbit-type wristbands. In one case, it was so that the company, reportedly, could analyse levels of employee stress, a touching example of digital paternalism, courtesy of the HR department.
And now it turns out that an outfit called Oral Roberts (which I had hitherto assumed was a brand of toothpaste, but is, in fact, an American private university) has stipulated that all its incoming freshmen must wear Fitbits to track their fitness levels. In the past, the unfortunate students of Oral Roberts were obliged to note down the number of steps and amount of activity they had carried out in a fitness journal. Henceforth, this will be done by digital technology. Mens sana in corpore sano and all that.
Since premarital sex is forbidden on the Oral Roberts campus, one would have thought the authorities would want to use students' Fitbit data to make sure there was no hanky-panky. But apparently they are not going to go down that road, although the technology can do it. (It's all to do with the rate of calorie burn, apparently.)
What is genuinely astounding about this infatuation with datafication is how far it has already extended. I know this because I came upon a provocative article by a New York University scholar, Karen Levy, published in, of all places, the Idaho Law Review. The title, Intimate Surveillance, says it all. The nub of it is: whatever you're into, there is an app for datafying it. Truly, Heidegger was right when he characterised technology as the art of arranging the world so that you don't have to experience it. And he didn't even have an Apple Watch.
The post When self-monitoring becomes uncomfortably intimate … appeared first on apsbicepstraining.com.
from WordPress http://ift.tt/2FlavqO via IFTTT
0 notes
Text
Seeing like a state of exception | 例外状态的视角
In a bittersweet combination of personal achievement and global emergency, I moved to Norway and started working on a new research project right when the COVID-19 epidemic started sweeping across the world. When I was settling in my new office and sketching out the next three years of academic careering, friends and relatives were quarantined in Wuhan or bracing for lockdown in other parts of the world. And as I’m writing this post, I’ve moved back to my university residence right before the WHO declared the virus a pandemic and my department building was locked for a few weeks of state-enforced social distancing. Regardless, the project I’m excited to be part of is about machine vision, and I’ll be looking into how this technology is deployed and utilized in Chinese everyday life, which seems to be slowly getting back to a degree of normalcy. In the meantime, China has been using machine vision to look into itself and try to track down the virus as it spread through its citizenry.
Following the outbreak through social media and news headlines, I noticed the diverse roles that machine vision has been playing in this situation, and followed how this technology is embroiled in variegated discussions ranging from the widespread distrust of authoritarian surveillance and the enrolling of homegrown tech industries into state propaganda to enthusiastic paeans to datafication and digital governance. More than a centralized attempt of the government to enforce a degree of legibility onto its citizens – what James Scott would call “seeing like a state” – China’s response to the COVID-19 epidemic seems to be benefiting from a new kind of sensing that is enabled by digital technologies deployed as an integral part of a state of exception. In Benjamin Bratton’s words, this is an important phenomenon because
[t]he optical positions of a state—how it sees the world and its constituents and how its citizens see themselves reflected through the ambient qualitative commons—might bear all the benefits and bankruptcies of earlier forms of communicative reason. (2015, p. 121)
But how did China’s “optical position” vis-à-vis the COVID-19 outbreak consolidate?
In the early days of the contagion, once the (by now disputed) origin of COVID-19 was pinpointed to the Huanan Seafood Wholesale Market in Wuhan, the virus was mostly tracked by concerned citizens via vernacular forms of coveillance: people would take photos of license plates and share them on social media, warning each other about the suspect presence of Wuhan residents in their town or district. In the absence of official measures or recommendations, all that people had at their disposal were mobile cameras, social networks, and the imperfect geolocational information provided by car plates. This strategy was quickly adopted by local authorities, that started tracking the movement of drivers across provinces; partial reports would circulate as copy-pasted news or screenshots, stoking a climate of ambient fear about Wuhanese residents. For example, one WeChat comment alleged that data from the Shenzhen Public Security System and the Center for Disease Control identified 60,000 people entering Shenzhen from Wuhan, worrying the local government and prompting a warning for the local population to stay put and prepare to spend the Spring Festival at home.
Tech companies quickly reoriented their platforms towards this sort of cross-regional surveillance, drawing on intersecting streams of data: as another WeChat message warns, Alibaba had supposedly tracked 6,000 Wuhan residents conducting Alipay transactions in Shanghai over a single day, hinting at a massive exodus from the city and an impending danger for the coastal metropolis. For many, this repurposing of smart city systems and platform urbanism was a foreseeable confirmation of the state’s surveillance capabilities. As Liza Lin documents on the Wall Street Journal, tracking the movement of infected or at risk individuals was a readily available capacity of China’s surveillance apparatus, which extends from roads to subways and from smartphones to social media, combining visual identification with geolocation and other sorts of data trails.
Another intersection between COVID-19 and machine vision is more mundane, and revolves around people's faces. As the virus spread, citizens resorted to using face masks (for a situated history of this object, see this), which were also made obligatory in the most affected provinces; yet, people quickly realized that wearing masks affected the convenience and usability of face recognition systems including smartphone locks, community gates and payment terminals. Many noted the uncanny resonance with the proposed ban on face masks during the 2019 Hong Kong protests, which was supposedly motivated by the need to identify protesters; even though pandemic and protest are two radically different states of exception, they spotlight the face as a contested nexus of biopolitical affordances. Regardless, technological fixes often trump convenience in exceptional times: mask-vending machines that use face recognition to identify buyers and ration masks started popping up, and eventually a tech company in Chengdu developed a 3D face recognition system capable of bypassing face masks while also checking a person's body temperature.
Besides car plates and masked faces, local tech companies have competed to play a central role in facilitating life under quarantine, which conveniently also helped spotlight their commitment to public service while strengthening governmental responses. Alongside donations, emergency services and information platforms, artificial intelligence has predictably emerged as one of the central contributions of companies like Tencent, Baidu and Alibaba, which have offered their cloud computing platforms and predictive algorithms to researchers for efforts in tracking infection vectors and simulating drug compounds. A broader definition of AI has cemented the impression that this technology is powering China's response to the epidemic, as exemplified by the many reports marveling at automated health information chatbots, autonomous vehicles used for disinfection and delivery, and a hospital ward "run entirely by robots" in Wuhan.
0 notes
Text
UK watchdog sets out “age appropriate” design code for online services to keep kids’ privacy safe
The UK’s data protection watchdog has today published a set of design standards for Internet services which are intended to help protect the privacy of children online.
The Information Commissioner’s Office (ICO) has been working on the Age Appropriate Design Code since the 2018 update of domestic data protection law — as part of a government push to create ‘world-leading’ standards for children when they’re online.
UK lawmakers have grown increasingly concerned about the ‘datafication’ of children when they go online and may be too young to legally consent to being tracked and profiled under existing European data protection law.
The ICO’s code is comprised of 15 standards of what it calls “age appropriate design” — which the regulator says reflects a “risk-based approach”, including stipulating that settings should default to ‘high privacy’; that only the minimum amount of data needed to provide the service should be collected and retained; and that children’s data should not be shared unless there’s a reason to do so that’s in their best interests.
Profiling should also be off by default. While the code also takes aim at dark pattern UI designs that seek to manipulate user actions against their own interests, saying “nudge techniques” should not be used to “lead or encourage children to provide unnecessary personal data or weaken or turn off their privacy protections”.
“The focus is on providing default settings which ensures that children have the best possible access to online services whilst minimising data collection and use, by default,” the regulator writes in an executive summary.
While the age appropriate design code is focused on protecting children, it applies to a very broad range of online services — with the regulator noting that “the majority of online services that children use are covered” and also stipulating “this code applies if children are likely to use your service” [emphasis ours].
This means it could be applied to anything from games, to social media platforms to fitness apps to educational websites and on-demand streaming services — if they’re available to UK users.
“We consider that for a service to be ‘likely’ to be accessed [by children], the possibility of this happening needs to be more probable than not. This recognises the intention of Parliament to cover services that children use in reality, but does not extend the definition to cover all services that children could possibly access,” the ICO adds.
Here are the 15 standards in full as the regulator describes them:
Best interests of the child: The best interests of the child should be a primary consideration when you design and develop online services likely to be accessed by a child.
Data protection impact assessments: Undertake a DPIA to assess and mitigate risks to the rights and freedoms of children who are likely to access your service, which arise from your data processing. Take into account differing ages, capacities and development needs and ensure that your DPIA builds in compliance with this code.
Age appropriate application: Take a risk-based approach to recognising the age of individual users and ensure you effectively apply the standards in this code to child users. Either establish age with a level of certainty that is appropriate to the risks to the rights and freedoms of children that arise from your data processing, or apply the standards in this code to all your users instead.
Transparency: The privacy information you provide to users, and other published terms, policies and community standards, must be concise, prominent and in clear language suited to the age of the child. Provide additional specific ‘bite-sized’ explanations about how you use personal data at the point that use is activated.
Detrimental use of data: Do not use children’s personal data in ways that have been shown to be detrimental to their wellbeing, or that go against industry codes of practice, other regulatory provisions or Government advice.
Policies and community standards: Uphold your own published terms, policies and community standards (including but not limited to privacy policies, age restriction, behaviour rules and content policies).
Default settings: Settings must be ‘high privacy’ by default (unless you can demonstrate a compelling reason for a different default setting, taking account of the best interests of the child).
Data minimisation: Collect and retain only the minimum amount of personal data you need to provide the elements of your service in which a child is actively and knowingly engaged. Give children separate choices over which elements they wish to activate.
Data sharing: Do not disclose children’s data unless you can demonstrate a compelling reason to do so, taking account of the best interests of the child.
Geolocation: Switch geolocation options off by default (unless you can demonstrate a compelling reason for geolocation to be switched on by default, taking account of the best interests of the child). Provide an obvious sign for children when location tracking is active. Options which make a child’s location visible to others must default back to ‘off’ at the end of each session.
Parental controls: If you provide parental controls, give the child age appropriate information about this. If your online service allows a parent or carer to monitor their child’s online activity or track their location, provide an obvious sign to the child when they are being monitored.
Profiling: Switch options which use profiling ‘off’ by default (unless you can demonstrate a compelling reason for profiling to be on by default, taking account of the best interests of the child). Only allow profiling if you have appropriate measures in place to protect the child from any harmful effects (in particular, being fed content that is detrimental to their health or wellbeing).
Nudge techniques: Do not use nudge techniques to lead or encourage children to provide unnecessary personal data or weaken or turn off their privacy protections.
Connected toys and devices: If you provide a connected toy or device ensure you include effective tools to enable conformance to this code.
Online tools: Provide prominent and accessible tools to help children exercise their data protection rights and report concerns.
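The defaults-centred standards above (high privacy by default, geolocation and profiling off, location visibility resetting at the end of each session) can be sketched as a settings object. This is purely illustrative; the class and field names are hypothetical and are not part of the ICO code or of any real API:

```python
from dataclasses import dataclass

@dataclass
class ChildPrivacySettings:
    """Hypothetical per-user settings whose defaults mirror the code's standards."""
    high_privacy: bool = True                # "Default settings": 'high privacy' by default
    geolocation_enabled: bool = False        # "Geolocation": switched off by default
    profiling_enabled: bool = False          # "Profiling": switched off by default
    data_shared_with_third_parties: bool = False  # "Data sharing": off absent a compelling reason
    location_visible_to_others: bool = False

    def end_session(self) -> None:
        # Options that make a child's location visible to others must
        # default back to 'off' at the end of each session.
        self.location_visible_to_others = False

settings = ChildPrivacySettings()
settings.location_visible_to_others = True  # child opts in during a session
settings.end_session()
assert settings.location_visible_to_others is False
```

The point of the sketch is simply that, under the code, every privacy-relevant option starts in its most protective state and any relaxation is an explicit, per-session opt-in rather than a persisted default.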
The Age Appropriate Design Code also defines children as under the age of 18 — which offers a higher bar than current UK data protection law which, for example, puts only a 13-year-age limit for children to be legally able to give their consent to being tracked online.
So — assuming (very wildly) — that Internet services were to suddenly decide to follow the code to the letter, setting trackers off by default and not nudging users to weaken privacy-protecting defaults by manipulating them to give up more data, the code could — in theory — raise the level of privacy both children and adults typically get online.
However it’s not legally binding — so there’s a pretty fat chance of that.
The regulator does make a point of noting that the standards in the code are backed by existing data protection laws, which it regulates and can legally enforce (and which include clear principles like ‘privacy by design and default’) — pointing out that it has powers to take action against law breakers, including “tough sanctions” such as orders to stop processing data and fines of up to 4% of a company’s global turnover.
So, in a way, the regulator appears to be saying: ‘Are you feeling lucky data punk?’
Last April the UK government published a white paper setting out its proposals for regulating a range of online harms — including seeking to address concern about inappropriate material that’s available on the Internet being accessed by children.
The ICO’s Age Appropriate Design Code is intended to support that effort. So there’s also a chance that some of the same sorts of stipulations could be baked into the planned online harms bill.
“This is not, and will not be, ‘law’. It is just a code of practice,” said Neil Brown, an Internet, telecoms and tech lawyer at Decoded Legal, discussing the likely impact of the suggested standards. “It shows the direction of the ICO’s thinking, and its expectations, and the ICO has to have regard to it when it takes enforcement action but it’s not something with which an organisation needs to comply as such. They need to comply with the law, which is the GDPR [General Data Protection Regulation] and the DPA [Data Protection Act] 2018.
“The code of practice sits under the DPA 2018, so companies which are within the scope of that are likely to want to understand what it says. The DPA 2018 and the UK GDPR (the version of the GDPR which will be in place after Brexit) covers controllers established in the UK, as well as overseas controllers which target services to people in the UK or monitor the behaviour of people in the UK. Merely making a service available to people in the UK should not be sufficient.”
“Overall, this is consistent with the general direction of travel for online services, and the perception that more needs to be done to protect children online,” Brown also told us.
“Right now, online services should be working out how to comply with the GDPR, the ePrivacy rules, and any other applicable laws. The obligation to comply with those laws does not change because of today’s code of practice. Rather, the code of practice shows the ICO’s thinking on what compliance might look like (and, possibly, goldplates some of the requirements of the law too).”
Organizations that choose to take note of the code — and are in a position to be able to demonstrate they’ve followed its standards — stand a better chance of persuading the regulator they’ve complied with relevant privacy laws, per Brown.
“Conversely, if they want to say that they comply with the law but not with the code, that is (legally) possible, but might be more of a struggle in terms of engagement with the ICO,” he added.
Zooming back out, the government said last fall that it’s committed to publishing draft online harms legislation for pre-legislative scrutiny “at pace”.
But at the same time it dropped a controversial plan included in a 2017 piece of digital legislation which would have made age checks for accessing online pornography mandatory — saying it wanted to focus on developing “the most comprehensive approach possible to protecting children”, i.e. via the online harms bill.
UK quietly ditches porn age checks in favor of wider online harms rules
How comprehensive the touted ‘child protections’ will end up being remains to be seen.
Brown suggests age verification could come through as a “general requirement”, given the age verification component of the Digital Economy Act 2017 was dropped — and “the government has said that these will be swept up in the broader online harms piece”.
The government has also been consulting with tech companies on possible ways to implement age verification online.
However the difficulties of regulating perpetually iterating Internet services — many of which are also operated by companies based outside the UK — have been writ large for years. (And are now mired in geopolitics.)
While the enforcement of existing European digital privacy laws remains, to put it politely, a work in progress…
Privacy experts slam UK’s ‘disastrous’ failure to tackle unlawful adtech
from RSSMix.com Mix ID 8204425 https://ift.tt/2TTiUvU via IFTTT
0 notes