#because algorithms are not neutral unbiased things
fallloverfic · 1 year
Text
Folks do know you can search this site by tags, right? It's kind of clunkier for that than a lot of other places, save for Instagram, but if you're not seeing something on your dash in, say, a fandom you're in, you can try searching by the tag to see if someone else has shared it. Tumblr isn't a self-producing algorithm sink like TikTok, Twitter, Facebook, or w/e (except for that random "recommended post" thing in the sidebar every now and then). If you aren't following folks who post and/or reblog much, you're going to have to go out and find people, and the easiest way to do that is to browse by tag. Plug your thing of choice into the search bar, whether that's [insert movie name] or "pumpkins", and see what there is. Just because you're not seeing it on your dash doesn't mean no one's talking about it. And that's true for any site you're on.
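(And a side note for the scripting-inclined: Tumblr even exposes tag browsing through its public v2 API. Below is a rough Python sketch, assuming the `requests` library; the API key string is a placeholder you'd get by registering an application with Tumblr.)

```python
# Rough sketch: browsing a Tumblr tag via the public v2 API.
# API_KEY is a placeholder; you'd get a real consumer key by
# registering an application with Tumblr.
import requests

API_KEY = "YOUR_CONSUMER_KEY"

def posts_for_tag(tag, limit=20):
    """Roughly what the search bar does: fetch recent posts for a tag."""
    resp = requests.get(
        "https://api.tumblr.com/v2/tagged",
        params={"tag": tag, "limit": limit, "api_key": API_KEY},
    )
    resp.raise_for_status()
    return resp.json()["response"]

for post in posts_for_tag("pumpkins"):
    print(post.get("blog_name"), post.get("post_url"))
```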
0 notes
alyssaxhansen · 3 years
Text
BLOG POST #4 - DUE 09/16/2021
1. What are some ways in which you feel like your privacy was invaded because of technology algorithms?
I feel as though my privacy has been invaded every time someone uses DMV records to find out my political affiliation and find my phone number, email, or even my home address. This past Monday afternoon (9/13/21) someone rang my doorbell. Normally I wouldn’t answer it, but I have been expecting some packages. The man who rang my doorbell was holding a clipboard, so I assumed he needed me to sign for something. However, he was not delivering anything. He was a volunteer going around asking registered Democrats if they had had a chance to vote no in the recall election. I was so taken aback, because getting my phone number and calling me is one thing, but finding my address and coming to my home is, I feel, a total invasion of privacy.
2. Do you think platforms and/or algorithms can be free of bias and remain neutral?
I do not think that any form of technology can be completely neutral or free from bias. Algorithms and social media apps are made by real people who hold their own values and beliefs, and even if they try to make unbiased platforms, their views can seep into their work. Take Twitter, for example: it has an algorithm that can sense when posts violate its terms and conditions. But who wrote those terms and conditions, and how can we be so sure they are neutral? Additionally, in January of 2021 Twitter permanently banned former president Donald J. Trump’s account. This is not an unbiased or neutral stance. By blocking Trump’s access to the platform, Twitter is allowing its judgements and beliefs to dictate who can and cannot post and what they can and cannot post.
3. What is one way that you have seen that the internet is “anti-black?”
One way that I have seen that the internet is “anti-black” is on TikTok. I have noticed that most of the creators shown on FYPs (For You pages) are white or white-passing. I have also noticed that even the POC (people of color) content creators I follow are shadowbanned, meaning that these creators don’t show up on my FYP or even on my following page.
4. What does having a “normal” name mean, and what are some challenges that individuals who do not have a “normal” name face?
“Normal” names are names that sound white and are associated with white individuals. These names are used as the standard against which other names are compared, especially the names of POC individuals. Individuals who do not have white-sounding names often face more negative treatment. As children, they are given nicknames because their given names are “too hard to pronounce.” As adults, they are less likely to be called back for a job, even when they have the same qualifications as an individual with a “normal” name.
References
Everett, A. (2002). The revolution will be digitized. Social Text, 20(2), 125–146. https://doi.org/10.1215/01642472-20-2_71-125
Benjamin, R. (2019). Race after technology: Abolitionist tools for the New Jim Code. Social Forces, 98(4), 1–3. https://doi.org/10.1093/sf/soz162
Noble, S. U. (2018). Algorithms of oppression. https://doi.org/10.2307/j.ctt1pwt9w5
National Association of Independent Schools (2018, June 22). Kimberlé Crenshaw: What is Intersectionality? [Video]. YouTube. https://www.youtube.com/watch?v=ViDtnfQ9FHc
8 notes
foodienamedmaddie · 3 years
Text
Week 4, Blog Post Due 9/16
Q1. Why is having your “cultural-lens” zoomed out so vital in terms of perspective with intersectionality? 
In the judge’s dismissal of Emma DeGraffenreid’s claim, described in the TED Talk by Dr. Kimberlé Crenshaw, we see a major lack of perspective. In this case, a car manufacturing company claimed that it had both racial and gender inclusion, yet it lacked Black women in its workplace (Crenshaw, 2016). Keeping the mind open to recognizing not only racial and gender bias but the two combined is therefore important. Workplace structures and hiring cultures need to be checked to ensure there is a fair and equal chance for minorities to be hired and to thrive within a company. Growth can occur when industries are challenged and held accountable for having a diverse and inclusive environment. Accountability is especially significant when recognizing the broken systems upon which our society was built. Bringing attention to and confronting tragedies and injustices among minorities, more specifically Black Americans in this case, is needed in order for change to happen. #SayHerName.
Q2. Is there a way algorithms can be neutral and objective?
I would suggest we change algorithms as soon as possible if we want to reach the goal of algorithms being neutral and objective. I had never heard of algorithmic oppression prior to taking this course, which is frightening, so I believe educating people on the topic itself is a great place to start heading toward impartial algorithms. The more algorithms are built into systems, the more we see minority groups discriminated against. Minority groups are already at enough of a disadvantage; having another major entity in society to go up against is unjust. Moving toward a more equal and just society is slowed when prejudice is embedded within technological systems. Technology is a fairly new creation and can be of great use, but it is important for these systems to be created right. Companies should be required by law to create unbiased algorithms, and if they fall short of those expectations, I believe there should be consequences. When situations of racism caused by a failing algorithm are brushed off, it signals to other companies that it is okay to build algorithms loosely. In addition, when things are done fast there is more room for error.
Q3. How do codes affect society?
As Ruha Benjamin states in “Race After Technology,” codes were made to control society itself and to set certain expectations for those presented with a code (Benjamin, 2019). Once one is coded, it is hard to be un-coded, so to speak. As mentioned in this piece, once someone is entered into a gang database it can be very difficult, almost impossible, to be removed. And you cannot ignore the fact that they were entered into the database because of their zip codes and their names. This is deeply unjust, as the people entered into the database become monitored. And now they have to live their lives in fear because their names and residential areas deem them gang members? People’s names can also be a determining factor in whether they get a job, get accepted into college, get approved for a credit card, get granted a home loan, and numerous other circumstances.
Q4. How can unequal access to the internet increase inequality in today’s world?
As technology is such an exponentially growing force, it is almost imperative to have access to the internet to be successful. Especially under the circumstances of COVID-19, children without the technological equipment to access school material are put at a disadvantage. The internet has enabled economic and social development, yet it is not equally accessible to all social groups, widening the gap of inequality.
References:
TED. (2016, December 7). The Urgency of Intersectionality | Kimberlé Crenshaw [Video]. YouTube. https://www.youtube.com/watch?v=akOe5-UsQ2o&t=3s
Noble, S. U. (2018). Algorithms of oppression. https://doi.org/10.2307/j.ctt1pwt9w5
Benjamin, R. (2019). Race after technology: Abolitionist tools for the New Jim Code. Social Forces, 98(4), 1–3. https://doi.org/10.1093/sf/soz162
4 notes
comradegeorgemoved · 3 years
Note
I'm genuinely happy that there is someone who is critical and unbiased but also likes talking about using one's potential to the fullest, because that's almost always a great combination. I agree George has great potential, and honestly I enjoy seeing how he, probably mostly obliviously, builds himself a lot of helpful connections. Dream has a lot of knowledge regarding the yt algorithm, but not many people mention how beneficial Quackity's professionalism and experience were for George's streams. I don't mean to make it sound as if George only relies on others; I just find his ability to get along with and start at least pleasant relationships with the other creators he interacts with a very good trait of his, and a useful one, especially when those creators can be very helpful. I still remember that one instance when Quackity helped George ban overly lengthy messages in chat, and how Quackity called out Sapnap for leaving streams early. It's just the small things we could see, but I wouldn't be surprised if Quackity gave George some general tips, or at least influenced him with his professional attitude. It's also amazing to me how George can easily slip away from drama, like recently with That Vegan Teacher, who got some attention from the mcyt fandom for dueting Tommy's tiktoks, if you know what I'm talking about. Basically she dueted George's tiktok about walking around in the snow in her usual making-everything-about-veganism manner, and I was ready for her to start hating on him more, but a few hours later she dueted his tiktok about eating apple slices dipped in apple juice, and she was smiling and gave him approval lmao. It's just so funny to me how neutral George is; it's hard to criticize him, and even when someone tries to start, it suddenly looks like the universe itself prevents it. it's so funny honestly
i agree. with stories like quackity’s and dream’s of building your own content and fanbase essentially from nothing (obviously in the case of dream, george and sapnap helped but it was largely him who studied the yt algorithm and pushed them to join him in the first place), it’s easy to diminish success stories that come from building both on your own experiences and on others’. anyone who has been following george’s content can see that the people he associates with have influenced his streams (trying out new games, the subgoal, wanting to stream earlier, the raid-a-thons he, karl and q are supposed to have, banning long messages, being more professional on streams), and that doesn’t make him a lesser streamer at all. it’s a huge advantage that he gets along with other people so well, because being close to them and seeing firsthand why they’re successful pushes him to make necessary changes to his own methods.
(and he rubs off on quackity, too! remember in the recent roblox stream, when q put his chat in sub-only and then said george calls it “money mode” lol. minor but i love them ok)
as for “slipping away from drama,” i know this isn’t what you meant to discuss but i don’t think any CC would respond to That Vegan Teacher if that’s what you meant? eating meat is hardly a strong critique and i don’t think they would ever take it seriously, because... why. did people actually get into drama over That Vegan Teacher dueting tommy? :O plus no one “cancels” george because, apart from being very very careful about what he does online (avoiding cursing, nsfw jokes, sharing any strong opinion + overthinking his interactions with “problematic” people like belle delphine + being genuinely afraid to play survive the internet), he just doesn’t say anything in the face of criticism, even legitimate ones. it definitely keeps him out of trouble, but it always upsets me when he’s just aggressively silent.
23 notes
itsmedianuh · 4 years
Text
Week 4: 09/16/20
1. In what ways do algorithms contribute to racism towards marginalized groups?
When we stop to think about technology, we expect things to be unbiased, neutral, and overall “fair game.” However, when we think about the human bias involved in creating and contributing to these platforms, they are far from neutral or just. In the reading “Algorithms of Oppression: How search engines reinforce racism,” Noble discusses an example of how these algorithms contribute to biased structures that drive a narrative promoting racist approaches toward marginalized groups. “While Googling things on the Internet that might be interesting to my stepdaughter and nieces, I was overtaken by the results. My search on the keywords “black girls” yielded HotBlackPussy.com as the first hit.” Experiences like these constantly feed into the sexualization, degradation, and objectification of women of color. Humans are far from ever being objective, as everyone carries their own biases whether we are aware of them or not, but it is up to us not to let these biases get in the way of creating a safe environment where everyone from different backgrounds can feel like they belong and are getting equal representation.
2. How should we approach dismantling and addressing the racism that exists on the internet and in media?
In the reading “Algorithms of Oppression: How search engines reinforce racism,” Noble presents the type of mindset one should have when approaching the internet and media: “At the very least, we must ask when we find these kinds of results, Is this the best information? For whom? We must ask ourselves who the intended audience is for a variety of things we find, and question the legitimacy of being in a “filter bubble”, when we do not want racism and sexism”. It is important first to realize that the internet and technology carry their own biases, especially when companies have underlying motives for how certain algorithms present results, as in the Google example in the previous response. This applies to anything on the internet, because as unbiased as platforms may claim to be, human involvement will almost always make them biased. The main thing in these situations is to ask ourselves the questions Noble states, and to be critical and aware of the purpose, the intended audience, and whatever underlying message may be lingering on these platforms and in the information they give.
3. What influence does an individual’s name carry? 
A name holds a lot of power and information about an individual and their identity: their culture, their history, their language, and so much more. In “Race after technology: Abolitionist tools for the New Jim Code,” Benjamin explores how a person’s name can easily be used against them and limit their opportunities, even with technology, as “they found that the algorithm associated White-sounding names with “pleasant” words and Black-sounding names with “unpleasant”...researchers show that, all other things being equal, job seekers with White-sounding first names received 50 percent more call backs from employers than job seekers with Black-sounding names”. In this reading, we see how a name can be used as a factor determining how some individuals benefit from their names while others face discrimination, even with the advancement of technology. This extends beyond the working world and can be seen in criminal profiling, politics, education, and the singling out of others based on just these small differences.
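To make the mechanism behind that finding concrete, here is a toy sketch of the kind of embedding association test researchers use (the result Benjamin cites builds on tests like Caliskan et al.'s WEAT). The 2-D vectors below are invented purely for illustration; a real audit would load embeddings trained on a large text corpus.

```python
# Toy sketch of an embedding association test in the spirit of WEAT.
# The 2-D vectors are invented for illustration only; a real audit
# would use embeddings trained on actual web text.
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

vectors = {  # hypothetical toy embeddings
    "emily":      np.array([0.9, 0.1]),
    "lakisha":    np.array([0.1, 0.9]),
    "pleasant":   np.array([0.8, 0.2]),
    "unpleasant": np.array([0.2, 0.8]),
}

def association(name):
    """Positive means the name sits closer to 'pleasant' than to 'unpleasant'."""
    return cosine(vectors[name], vectors["pleasant"]) - cosine(
        vectors[name], vectors["unpleasant"]
    )

for name in ("emily", "lakisha"):
    print(name, round(association(name), 3))
# With embeddings trained on web text, researchers measured exactly this
# kind of asymmetry between White- and Black-sounding names.
```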
4. In what ways have these platforms, both digital and print, allowed for black communities to fight against systematic oppression? 
In the reading “The Revolution Will Be Digitized: Afrocentricity and the Digital Public Sphere,” Everett explores the different ways a digitized revolution is expected to take place and gives examples of how it is happening right now. I very much agree with Everett: the internet is a place that allows anyone to log on and express themselves, and lets individuals create and form communities that share the same views, drive, and experiences, leading to active revolution and reform on the matters important to them. “The earliest black political pamphlets, newspapers, magazines, and other forms of black writing established a tradition of protest literature that has been a prominent feature throughout the history of the press’s “uplift” mission, or journalistic freedom fighting. Equally important as its struggles for racial justice, particularly during heightened moments of political and economic crises, was the press’s role as cultural arbiter and promoter”. This approach has always been present, and we can see how these platforms, whether the press, writing, journalism, or now technology, bring awareness from communities working to obtain equal opportunities and change. It happened before, and this digitized revolution is expanding because the internet allows a global reach, bringing people together through quick methods of communication and driving change at a faster and more present rate.
Benjamin, R. (2019). Race after technology: Abolitionist tools for the New Jim Code. Cambridge: Polity.
Everett, A. (2002). The revolution will be digitized: Afrocentricity and the digital public sphere. Social Text, 20(2), 125–146.
Noble, S. (2018). Algorithms of oppression: How search engines reinforce racism. New York University Press.
15 notes
pinkrangerv · 3 years
Text
Internet Safety Lessons, Pt 3
Okay, so we know not to talk to strangers, but how do we know when we can trust friends? How do we know if we can believe that post claiming cops are selling NFTs of George Floyd (they aren't)? How do we know if the spill of fuel in Hawai'i is being cleaned up (it is, and the Army deployed to help)?
Here we get to the fun part: Verifying what you can and can't trust.
The first rule of thumb is this: If someone posts it on Tumblr, Discord, or any other social media (ESPECIALLY Facebook, their algorithm fucks up so bad), just...google it. (DuckDuckGo it, but you know what I mean.)
It doesn't matter if it sounds right. It doesn't matter if it sounds wrong. Just search every. Damn. Post. (Or ignore them, but if you want to learn if something's true, go look it up.) Various news organizations will have articles on this. You can trust sources like the Associated Press, Al-Jazeera English (for non-Middle Eastern news; they understandably struggle with being unbiased about their home turf), CNN, and if you want to pay a starving journalist, NYTimes and Washington Post.
Other sources have more bias. MSNBC is biased, but it's usually fairly easy to tell what's opinion and what's fact. Fox News, OAN, and other right-wing sites will actually just lie and call it fact. As I said, Al-Jazeera English, being based in the Middle East, struggles with bias against Israel for some very understandable reasons, but is usually a good, neutral resource for anything outside of their home turf.
If something sounds like it's not making sense, google 'History of X Conflict' and find the Wikipedia article on it. Wikipedia isn't an acceptable essay source, but for a basic primer on things like history, it does very well, especially since they've marked where sources are questionable.
Now, that's all well and good, but at some point you're going to make friends and that's not like searching a news article. You can't--and shouldn't--dig up dirt on potential friends to screen them. So how do you tell if they're genuine?
To start with, look at what the people you follow post. Do they post things in line with their beliefs? Do they claim one thing and post another? If you're trans, you probably don't want to hang out with the person who's got pronouns in their bio but is atwitter over the Fantastic Beasts release, because their true stance on this issue might be uncomfortable for you. If you're autistic, you probably don't want to hang out with people who unironically proclaim things are 'cringe', because they'll turn on you eventually.
Any action you know of can back this up (or not). Do they talk about doing things to support their beliefs? Are they telling you about groups that do what they say they do? Are they offering emotional support when they say they will? Is it really helpful, or just pressuring you into feeling better?
These are things that will tell you if someone is trustworthy. If you think they are, start with a low level of information--a first name only, or a hint as to where you live 'x amount of time outside of city in profile'. See what they do with it. Go from there.
These are all the things I can think of to teach you bunnies about safety online, but have at it. Now I'm off to reconfigure my old usernames, because I've got some security updating to do myself.
0 notes
art-in-the-age · 4 years
Text
Part 2: Witnessing Conflict
Young people are increasingly getting their news through the lens of social media, which makes it that much more essential to understand the way different platforms refract information. For as long as people have used social media, the content posted has reflected current events more generally, something that is becoming especially acute as time passes.

Tumblr users bore witness to several conflicts that unfolded across the world in the year 2014, something Rosemary Pennington chronicled in her article in the International Communication Gazette, “Witnessing the 2014 Gaza War in Tumblr”, through which she explores how several Muslim Tumblr users interacted with and witnessed the violence occurring towards Palestinians during the 2014 Gaza War. She writes in her introduction, “Traditionally, it has been witnessing that can make us feel close to those suffering through the violence we see in media as well as others we imagine are in the audience witnessing the event with us,” (Pennington). Tumblr as a platform provides both a means to witness the violence, as well as a community of fellow witnesses, inspiring feelings of closeness that would heighten emotions. In the case of the Gaza War, the bloggers take note of the fact that the mainstream media centers the experiences of Israelis and largely neglects Palestinian suffering in the construction of their narrative (Pennington). Through the usage of Tumblr, Palestinians can share photos and narratives that reflect their experiences, which can then be disseminated by bloggers elsewhere in the world, such as those who were the subject of Pennington’s research.

The platform provides the space to construct an Oppositional Gaze, in the words of bell hooks. hooks writes of the oppositional gaze, “By courageously looking, we defiantly declared: ‘Not only will I stare, I want my look to change reality.’ Even in the worse circumstances of domination, the ability to manipulate one’s gaze in the face of structures of domination that would contain it, opens up the possibility of agency,” (hooks 116). Palestinians are able to control their gaze in a way that stares back at those who are oppressing them, counteracting the narrative that they are the sole aggressors and thus giving them agency. Tumblr elevated the narratives of Palestinians to the point where they could be held in conversation with and in contradiction to those pushed by wealthy media conglomerates. Communities centered around sending aid can also be formed on the platform, which is only possible through the shared experience of witnessing. Pennington posits with her research that Tumblr was a crucial piece in raising global awareness of the situation in Gaza, a lasting impact of the platform.
Six years later, the world is no less familiar with incredible amounts of violence and suffering, especially as we live through the COVID-19 pandemic. Relegated to our houses, many Americans turned to TikTok for entertainment but found within it a well of resources for activists as the nation erupted in protests this summer in response to the killing of George Floyd and other Black Americans. TikTok, like Tumblr, allowed the average citizen to both bear witness to violence and share their narrative of the situation without it being refracted through the lens of a mainstream media source.

TikTok, however, is still plagued by issues endemic to the platform. All content distribution is driven by the algorithm, which incentivizes outrageous or highly emotional content, raising the stakes to a point that may desensitize viewers after a certain amount of information. The algorithm can also end up prioritizing only a few voices, typically those who already have a platform. This in turn creates its own hierarchy which, although independent from traditional news networks, is still exclusionary. Much of the information viewed is not controlled, as the primary interface of the app is the For You Page; if the average user is not putting in effort to control the type of information and content they are viewing, it's not likely that they will put in effort to ensure that it is accurate or unbiased.
TikTok and Tumblr users alike are fond of their image-based communities and continue to source them on the same platform where they source their news, the unintended consequence of which is the fascist aestheticization of politics as theorized by Benjamin in his 1935 essay, “The Work of Art in the Age of Mechanical Reproduction”. He writes, “All efforts to render politics aesthetic culminate in one thing: war,” and later continues, “Mankind, which in Homer’s time was an object of contemplation for the Olympian gods, now is one for itself. Its self-alienation has reached such a degree that it can experience its own destruction as an aesthetic pleasure of the first order. This is the situation of politics which Fascism is rendering aesthetic,” (Benjamin 19-20).

In the context of 2020 civil unrest on TikTok, the juxtaposition of violent oppression with daily vlogs from teens in thrifted clothes dancing around big cities has led to both being subsumed into a dominant identity that holds “activism” as a core component. To truly be a member of the alt-TikTok community, one should be a self-identified leftist and activist. Both are noble ideas, and pushing for more accessible leftist literature is not a bad thing, but the issue arises when those looking for membership in the community are unwilling or unable to do the work. The process of unlearning carceral understandings of justice and the subtle ways in which racism is intertwined in our everyday lives is a conscious, long, and oftentimes difficult process, one that teens are undertaking with the ultimate goal of membership in a community whose spokespeople are predominantly white and wealthy.

The shortcut has become adding “BLM” and “ACAB” to a user’s bio, signaling to other users that they are socially aware. Memes consisting of a cartoon character, such as Hello Kitty, saying “ACAB” were added to profiles, repositioning an acronym with long traditions in anti-racist and leftist activism as an aestheticized trend. The acronym is not entirely devoid of meaning, because leftist circles extend far beyond the teenage communities on TikTok, but to this new generation, adding ACAB to a bio means less a radical resistance to the carceral state and more a display of performative activism. This practice has led to the acronym being reappropriated into the pejorative term “Emily ACAB”, which typically refers to a wealthy, white teenage girl attempting to be performatively woke without renouncing any of her privileges. Emily ACAB is the rebellious teen daughter of the Karen, someone who uses a movement meant to protect the lives of systematically marginalized groups as a way to separate herself from her family that “just does not understand” but ultimately won’t take too strong a stance if it means sacrificing something of importance to her. The aestheticization of politics neutralizes the message, something that Benjamin knew all too well, and that TikTok teenagers, many of whom are well-meaning, now find themselves falling victim to.
Despite being only separated by six years, teens in 2020 find themselves living and comprehending current events in a dramatically different world. No generation comes of age without a tremendous amount of hardship, personal and interpersonal, but Gen-Z is the first to have that hardship published on the internet. Social media has revolutionized organizing in many ways for the better, but as with all developments, it is one that requires active participation and checking of power. TikTok and Tumblr have made positive contributions to activism, but the nature of social media’s democratization of information requires we all pay attention to ensure neither platform does more harm than good.
0 notes
college essays writing
About me
College Application Essay Worksheets & Teaching Resources
They do not know what admissions officers are looking for. For the same reason, I don't think English teachers make great admissions essay readers. Your English teacher reads your essay as 1 out of 30. Rest assured, your personal information will be kept secret. Our admissions consultants write each essay from scratch; 100% requirements compliance, high-quality writing, and catchy, memorable content are assured. Apart from writing services, you can get essay samples on our website. We offer more than 20 different admission essay samples free of charge.

The admissions officer reads yours as 1 out of thousands, maybe even 10,000 or more. Your English teacher reads your essay to assign one grade out of many. The admissions officer reads to determine whether to offer you one spot out of probably relatively few. Many applicants will have high GPAs and SAT scores, volunteer in a local organization, or be the president of a club or captain of a sports team.

Yes, it is perfectly okay to have your parents edit your essays. However, the key is to edit, not to write them for you. They can help with typos and grammatical errors, and help you to be clear, concise and compelling. They're looking for good fits as they make up their rosters. Get the college essay help you need, right when you need it, with the convenience of online lessons.

A competently composed admission essay will help the applicant highlight the winning aspects of his academic life and give a good account of himself. In this writing, the admissions committee should see integrity and a deep personality with fine qualities and experiences that aspire to development and new knowledge. An admission essay is a professional project, which should be well composed, reviewed, and free of grammatical mistakes. Click to download essay samples and use them for inspiration. We will assign this order to a jurisprudence expert who knows what is necessary to write about to enroll at a law school.

They know you best, often more than you know yourself, so they might have good ideas. However, you do want the essays to sound like you; it must be your voice. There must be some consistency between the essays and interviews. I do not believe that parents make good essay editors because they are not admissions officers.

NCSA staff members regularly put together highlight videos that show a player's skills in the best light possible. Its message center also helps simplify the process of talking with coaches. Meanwhile, the organization's matching algorithm helps student-athletes decide which colleges would likely be good matches for them. The organization was founded by a former high school-turned-college football player who had struggled through his recruiting experience in the 1980s and wanted to help those following him. As a result, NCSA has helped more than 150,000 student-athletes find spots on college teams over the past couple of decades.

Admissions officers are looking for something, anything, to distinguish your essay from the pile. The emphasis must be on “help” and not “take over.” Parents, with only the best intentions, will often offer a lot of input and comments, which their child will gratefully accept.

The danger there is that the essay starts sounding more like a 40-something adult instead of a high school senior. I think it is always best for a student to have an impartial person do the proofing. It is difficult for parents to remain unbiased, and often it can cause a lot of added pressure between the student and parent. It is, however, a good idea for the parents to help the student brainstorm ideas for the essay prior to writing it. We will write it carefully, offering free amendments and revisions. The essay is intended to draw the attention of an enrollment board to the knowledge and skills of the candidate. To achieve this goal, one must ensure that an admission essay distinguishes him from other candidates. We understand how important it is that no one knows you ordered your admission essay online.
0 notes
stagepaul2-blog · 5 years
Text
Why diversity in artificial intelligence development matters
There is a flawed but common notion when it comes to artificial intelligence: Machines are neutral — they have no bias.
In reality, machines are a lot more like people: if they're taught implicit bias by being fed non-inclusive data, they will behave with bias.
This was the topic of the panel “Artificial Intelligence: Calculating Our Culture” at the HUE Tech Summit on day one of Philly Tech Week presented by Comcast. (Here’s why it matters that Philly has a conference for women of color in tech.)
“In Silicon Valley, Black people make up only about 2% [of technologists],” said Asia Rawls, director of software and education for the Chicago-based intelligent software company Reveal. “When Google Analytics labeled Black people as apes, that’s not the algorithm. It’s you. The root is people. Tech is ‘neutral,’ but we define it.”
“Machine learning learns not by itself, but by our data,” said moderator Annalisa Nash Fernandez, intercultural strategist for Because Culture. “We feed it data. We’re feeding it flawed data.”
Often, the flaw is that the data isn’t inclusive. For example, when developers assume that the tech will react to dark skin the same as light skin, they’re creating a neutrality that doesn’t actually exist — so, an automated soap dispenser won’t sense dark skin.
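One way to picture how that failure happens: imagine the dispenser's trigger threshold was calibrated only on light-skinned testers. A minimal sketch with invented reflectance numbers, not real sensor code:

```python
# Toy illustration of a "neutral" sensor inheriting bias from its test pool.
# Reflectance values are invented; real IR sensors are more complicated.
calibration_readings = [0.72, 0.68, 0.75, 0.70]  # every tester had light skin
THRESHOLD = min(calibration_readings) * 0.9       # "works for everyone we tested"

def dispenses_soap(reflectance: float) -> bool:
    return reflectance >= THRESHOLD

print(dispenses_soap(0.71))  # light skin: True
print(dispenses_soap(0.35))  # darker skin reflects less IR light: False, no soap
```

Nothing in the code mentions race, but the narrow calibration pool makes the outcome discriminatory anyway, which is exactly the panel's point.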
“Implicit bias in computer vision technology means that cameras that don’t see dark skin are in Teslas, telling them whether to stop or not,” said Ayodele Odubela, founder of fullyConnected, an education platform for underrepresented people.
If there’s a positive note, panelists said, it’s that companies are learning to expand their data sets when a lack of diversity in their product development becomes apparent.
AI can expose bias, too. Odubela works with Astral AR, a Texas-based company that’s part of FEMA’s Emerging Technology Initiative. The company builds drones that can intervene when someone — including a police officer — pulls a gun on an unarmed person and actually stops the bullet they fire.
“It can identify a weapon versus a non-weapon and will deescalate a situation regardless of who is escalating,” Odubela said.
What can be done now to make AI and machine learning less biased? More people from underrepresented groups are needed in tech, but even if you’re not working in AI (and even if you’re not working in tech at all), there’s one ridiculously simple thing you can do to help increase the datasets: Take those surveys when they pop up on your screen, asking for feedback about a company or digital product.
“Take a survey, hit the chatbot,” said Amanda McIntyre Chavis, New York ambassador of Women of Wearables. “They need the data analytics.”
“People don’t respond to those surveys, then they complain,” said Rawls. “I always respond, and I’ll go off in the comments.”
Ultimately, if our machines are going to be truly unbiased anytime soon, there needs to be an understanding that humans are biased, even when they don’t mean to be.
“We need to get to a place where we can talk about racism,” said Rawls.
If we don’t, eventually the machines will probably be the ones to bring  it up.
-30-
Source: https://technical.ly/philly/2019/05/07/importance-of-diversity-in-artificial-intelligence-development-hue-tech-summit/
0 notes
Text
[Image: a Google search being typed into the search bar]
Many believe that the internet is unbiased and neutral; Algorithms of Oppression challenges that idea, arguing that there are people in society who create algorithms with sexist or racially discriminatory beliefs built in. These people are a part of our society, and some hold higher authority, such as the government, political leaders, celebrities, and so on. Safiya Umoja Noble, in her best-selling book Algorithms of Oppression, challenges the idea that search engines like Google offer an equal playing field for all forms of ideas, identities, and activities. Data discrimination is a real social problem. Noble argues and goes into detail about the combination of private interests in promoting certain sites on Google and other Internet search engines, and she explains how these search engines lead to a biased set of search algorithms that privilege whiteness and discriminate against people of color, specifically women of color.
Google is and has been one of the largest companies in the world, and the people behind and controlling it have beliefs of their own. Noble explains how one day she made a Google search for “black girls” and the first thing that came up was a pornographic website featuring black women. This is a major issue, as it defines black women as sex objects. When you google “white women,” the first things you see are happy white women smiling, with jobs, clothes on, and success. It should be the same for any other race and gender, but it’s not, which is why this is such an issue. Google being such a dominant application in our world means that it faces very little competition, and because of this the company does not often face consequences for its actions, because people are still going to use it. If the only information being seen about black people is negative or stereotypical, then that is all people will believe to be true. This is where a negative connotation can be projected onto certain races and genders.
The picture above was chosen by me to show how simple it is to go to Google and type in anything that comes to mind. Google may just be surfacing people’s top searches; for instance, the pornography that came up when Noble searched for black girls may reflect consumers consistently searching for those two topics together. Nevertheless, Google as a company certainly understands that its impact on the world is huge, and it now has the opportunity to help change racial and gender issues. If Google is able to change the results for black girls and other searches to something positive or uplifting, it may change the way people think over time, and it can save the company from future turmoil.
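For anyone who wants to repeat Noble's experiment more systematically than one query at a time, here is a small sketch that compares autocomplete suggestions across parallel phrases. It relies on Google's unofficial suggest endpoint, which is undocumented and may change or be rate-limited, so treat it as an illustration rather than a supported API.

```python
# Sketch of a tiny autocomplete audit in the spirit of Noble's work: compare
# what the engine suggests for parallel phrases. Uses Google's unofficial
# suggest endpoint (undocumented; it may change or be rate-limited).
import requests

def suggestions(prefix):
    resp = requests.get(
        "https://suggestqueries.google.com/complete/search",
        params={"client": "firefox", "q": prefix},  # client=firefox returns JSON
    )
    resp.raise_for_status()
    return resp.json()[1]  # response shape: [query, [suggestion, ...]]

for prefix in ("black girls", "white girls"):
    print(prefix, "->", suggestions(prefix))
```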
0 notes
endenogatai · 4 years
Text
Europe sets out plan to boost data reuse and regulate “high risk” AIs
European Union lawmakers have set out a first bundle of proposals for a new digital strategy for the bloc, one that’s intended to drive digitalization across all industries and sectors — and enable what Commission president Ursula von der Leyen has described as ‘A Europe fit for the Digital Age‘.
It could also be summed up as a ‘scramble for AI’, with the Commission keen to rub out barriers to the pooling of massive European data sets in order to power a new generation of data-driven services as a strategy to boost regional competitiveness vs China and the U.S.
Pushing for the EU to achieve technological sovereignty is a key plank of von der Leyen’s digital policy plan for the 27-Member State bloc.
Presenting the latest on her digital strategy to press in Brussels today, she said: “We want the digital transformation to power our economy and we want to find European solutions in the digital age.”
The top-line proposals are:
AI
Rules for “high risk” AI systems such as in health, policing, or transport requiring such systems are “transparent, traceable and guarantee human oversight”
A requirement that unbiased data is used to train high-risk systems so that they “perform properly, and to ensure respect of fundamental rights, in particular non-discrimination”
Consumer protection rules so authorities can “test and certify” data used by algorithms in a similar way to existing rules that allow for checks to be made on products such as cosmetics, cars or toys
A “broad debate” on the circumstances where use of remote use of biometric identification could be justified
A voluntary labelling scheme for lower risk AI applications
Proposing the creation of an EU governance structure to ensure a framework for compliance with the rules and avoid fragmentation across the bloc
Data
A regulatory framework covering data governance, access and reuse between businesses, between businesses and government, and within administrations to create incentives for data sharing, which the Commission says will establish “practical, fair and clear rules on data access and use, which comply with European values and rights such as personal data protection, consumer protection and competition rules” 
A push to make public sector data more widely available by opening up “high-value datasets” to enable their reuse to foster innovation
Support for cloud infrastructure platforms and systems to support the data reuse goals. The Commission says it will contribute to investments in European High Impact projects on European data spaces and trustworthy and energy efficient cloud infrastructures
Sectoral specific actions to build European data spaces that focus on specific areas such as industrial manufacturing, the green deal, mobility or health
The full data strategy proposal can be found here.
While the Commission’s white paper on AI “excellence and trust” is here.
Next steps will see the Commission taking feedback on the plan — as it kicks off public consultation on both proposals.
A final draft is slated by the end of the year after which the various EU institutions will have their chance to chip into (or chip away at) the plan. So how much policy survives for the long haul remains to be seen.
Tech for good
At a press conference following von der Leyen’s statement Margrethe Vestager, the Commission EVP who heads up digital policy, and Thierry Breton, commissioner for the internal market, went into some of the detail around the Commission’s grand plan for “shaping Europe’s digital future”.
The digital policy package is meant to define how we shape Europe’s digital future “in a way that serves us all”, said Vestager.
The strategy aims to unlock access to “more data and good quality data” to fuel innovation and underpin better public services, she added.
The Commission’s digital EVP Margrethe Vestager discussing the AI whitepaper
Collectively, the package is about embracing the possibilities AI creates while managing the risks, she also said, adding: “The point obviously is to create trust, rather than fear.”
She noted that the two policy pieces being unveiled by the Commission today, on AI and data, form part of a more wide-ranging digital and industrial strategy whole with additional proposals still to be set out.
“The picture that will come when we have assembled the puzzle should illustrate three objectives,” she said. “First, that technology should work for people and not the other way round; it is first and foremost about purpose. The development, the deployment, the uptake of technology must work in the same direction to make a real positive difference in our daily lives.
“Second, that we want a fair and competitive economy — a full Single Market where companies of all sizes can compete on equal terms, where the road from garage to scale up is as short as possible. But it also means an economy where the market power held by a few incumbents cannot be used to block competition. It also means an economy where consumers can take it for granted that their rights are being respected and profits are being taxed where they are made.”
Thirdly, she said the Commission plan would support “an open, democratic and sustainable society”.
“This means a society where citizens can control the data that they provide, where digital platforms are accountable for the content that they feature… This is a fundamental thing — that while we use new digital tools, use AI as a tool, we build a society based on our fundamental rights,” she added, trailing a forthcoming democracy action plan.
Digital technologies must also actively enable the green transition, said Vestager — pointing to the Commission’s pledge to achieve carbon neutrality by 2050. Digital, satellite, GPS and sensor data would be crucial to this goal, she suggested.
“More than ever a green transition and digital transition goes hand in hand.”
On the data package Breton said the Commission will launch a European and industrial cloud platform alliance to drive interest in building the next gen platforms he said would be needed to enable massive big data sharing across the EU — tapping into 5G and edge computing.
“We want to mobilize up to €2BN in order to create and mobilize this alliance,” he said. “In order to run this data you need to have specific platforms… Most of this data will be created locally and processed locally — thanks to 5G critical network deployments but also locally to edge devices. By 2030 we expect on the planet to have 500BN connected devices… and of course all the devices will exchange information extremely quickly. And here of course we need to have specific mini cloud or edge devices to store this data and to interact locally with the AI applications embedded on top of this.
“And believe me the requirement for these platforms are not at all the requirements that you see on the personal b2c platform… And then we need of course security and cyber security everywhere. You need of course latencies. You need to react in terms of millisecond — not tenths of a second. And that’s a totally different infrastructure.”
“We have everything in Europe to win this battle,” he added. “Because no one has expertise of this battle and the foundation — industrial base — than us. And that’s why we say that maybe the winner of tomorrow will not be the winner of today or yesterday.”
Trustworthy artificial intelligence
On AI Vestager said the major point of the plan is “to build trust” — by using a dual push to create what she called “an ecosystem of excellence” and another focused on trust.
The first piece includes a push by the Commission to stimulate funding, including in R&D and support for research such as by bolstering skills. “We need a lot of people to be able to work with AI,” she noted, saying it would be essential for small and medium sized businesses to be “invited in”.
On trust the plan aims to use risk to determine how much regulation is involved, with the most stringent rules being placed on what it dubs “high risk” AI systems. “That could be when AI tackles fundamental values, it could be life or death situation, any situation that could cause material or immaterial harm or expose us to discrimination,” said Vestager.
To scope this the Commission approach will focus on sectors where such risks might apply — such as energy and recruitment.
If an AI product or service is identified as posing a risk then the proposal is for an enforcement mechanism to test that the product is safe before it is put into use. These proposed “conformity assessments” for high risk AI systems include a number of obligations Vestager said are based on suggestions by the EU’s High Level Expert Group on AI — which put out a slate of AI policy recommendations last year.
The four requirements attached to this bit of the proposals are: 1) that AI systems should be trained using data that “respects European values and rules” and that a record of such data is kept; 2) that an AI system should provide “clear information to users about its purpose, its capabilities but also its limits” and that it be clear to users when they are interacting with an AI rather than a human; 3) AI systems must be “technically robust and accurate in order to be trustworthy”; and 4) they should always ensure “an appropriate level of human involvement and oversight”.
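To get a feel for how broad those four requirements are, it can help to imagine them as a pre-deployment checklist. The sketch below is purely hypothetical: the Commission has published no such schema, and every field name is invented.

```python
# Hypothetical sketch only: the white paper's four requirements imagined as a
# pre-deployment checklist. The Commission has specified no such schema; all
# field names here are invented for illustration.
from dataclasses import dataclass

@dataclass
class ConformityAssessment:
    training_data_documented: bool      # 1) data respects EU values/rules, record kept
    purpose_and_limits_disclosed: bool  # 2) users told purpose, capabilities, limits
    robustness_validated: bool          # 3) technically robust and accurate
    human_oversight_defined: bool       # 4) appropriate human involvement

    def may_enter_market(self) -> bool:
        return all(vars(self).values())

system = ConformityAssessment(True, True, False, True)
print(system.may_enter_market())  # False: robustness never validated
```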
Obviously there are big questions about how such broad-brush requirements will be measured and stood up (as well as actively enforced) in practice.
If an AI product or service is not identified as high risk Vestager noted there would still be regulatory requirements in play — such as the need for developers to comply with existing EU data protection rules.
In her press statement, Commission president von der Leyen highlighted a number of examples of how AI might power a range of benefits for society — from “better and earlier” diagnosis of diseases like cancer to helping with her parallel push for the bloc to be carbon neutral by 2050, such as by enabling precision farming and smart heating — emphasizing that such applications rely on access to big data.
“Artificial intelligence is about big data,” she said. “Data, data and again data. And we all know that the more data we have the smarter our algorithms. This is a very simple equation. Therefore it is so important to have access to data that are out there. This is why we want to give our businesses but also the researchers and the public services better access to data.”
“The majority of data we collect today are never ever used even once. And this is not at all sustainable,” she added. “In these data we collect that are out there lies an enormous amount of precious ideas, potential innovation, untapped potential we have to unleash — and therefore we follow the principle that in Europe we have to offer data spaces where you can not only store your data but also share with others. And therefore we want to create European data spaces where businesses, governments and researchers can not only store their data but also have access to other data they need for their innovation.”
She too stressed the need for AI regulation, including to guard against the risk of biased algorithms — saying “we want citizens to trust the new technology”. “We want the application of these new technologies to deserve the trust of our citizens. This is why we are promoting a responsible, human centric approach to artificial intelligence,” she added.
She said the planned restrictions on high risk AI would apply in fields such as healthcare, recruitment, transportation, policing and law enforcement — and potentially others.
“We will be particularly careful with sectors where essential human interests and rights are at stake,” she said. “Artificial intelligence must serve people. And therefore artificial intelligence must always comply with people’s rights. This is why a person must always be in control of critical decisions and so called ‘high risk AI’ — this is AI that potentially interferes with people’s rights — have to be tested and certified before they reach our single market.”
“Today’s message is that artificial intelligence is a huge opportunity in Europe, for Europe. We do have a lot but we have to unleash this potential that is out there. We want this innovation in Europe,” von der Leyen added. “We want to encourage our businesses, our researchers, the innovators, the entrepreneurs, to develop artificial intelligence and we want to encourage our citizens to feel confident to use it in Europe.”
Towards a rights-respecting common data space
The European Commission has been working on building what it dubs a “data economy” for several years at this point, plugging into its existing Digital Single Market strategy for boosting regional competitiveness.
Its aim is to remove barriers to the sharing of non-personal data within the single market. The Commission has previously worked on regulation to ban most data localization, as well as setting out measures to encourage the reuse of public sector data and open up access to scientific data.
Healthcare data sharing has also been in its sights, with policies to foster interoperability around electronic health records, and it’s been pushing for more private sector data sharing — both b2b and business-to-government.
“Every organisation should be able to store and process data anywhere in the European Union,” it wrote in 2018. It has also called the plan a “common European data space“. Aka “a seamless digital area with the scale that will enable the development of new products and services based on data”.
The focus on freeing up the flow of non-personal data is intended to complement the bloc’s long-standing rules on protecting personal data. The General Data Protection Regulation (GDPR), which came into force in 2018, has reinforced EU citizens’ rights around the processing of their personal information — updating and bolstering prior data protection rules.
The Commission views GDPR as a major success story by merit of how it’s exported conversations about EU digital standards to a global audience.
But it’s fair to say that back home enforcement of the GDPR remains a work in progress, some 21 months in — with many major cross-border complaints attached to how tech and adtech giants are processing people’s data still sitting on the desk of the Irish Data Protection Commission where multinationals tend to locate their EU HQ as a result of favorable corporate tax arrangements.
The Commission’s simultaneous push to encourage the development of AI arguably risks heaping further pressure on the GDPR — as both private and public sectors have been quick to see model-making value locked up in citizens’ data.
Already across Europe there are multiple examples of companies and/or state authorities working on building personal data-fuelled diagnostic AIs for healthcare; using machine learning for risk scoring of benefits claimants; and applying facial recognition as a security aid for law enforcement, to give three examples.
There has also been controversy fast following such developments. Including around issues such as proportionality and the question of consent to legally process people’s data — both under GDPR and in light of EU fundamental privacy rights as well as those set out in the European Convention of Human Rights.
Only this month a Dutch court ordered the state to cease use of a blackbox algorithm for assessing the fraud risk of benefits claimants on human rights grounds — objecting to a lack of transparency around how the system functions and therefore also “insufficient” controllability.
The von der Leyen Commission, which took up its five-year mandate in December, is alive to rights concerns about how AI is being applied, even as it has made it clear it intends to supercharge the bloc’s ability to leverage data and machine learning technologies — eyeing economic gains.
Commission president, Ursula von der Leyen, visiting the AI Intelligence Center in Brussels (via the EC’s EbS Live AudioVisual Service)
The Commission president committed to publishing proposals to regulate AI within the first 100 days — saying she wants a European framework to steer application to ensure powerful learning technologies are used ethically and for the public good.
But a leaked draft of the plan to regulate AI last month suggested it would step back from imposing even a temporary ban on the use of facial recognition technology — leaning instead towards tweaks to existing rules and sector/app specific risk-assessments and requirements.
It’s clear there are competing views at the top of the Commission on how much policy intervention is needed on the tech sector.
Breton has previously voiced opposition to regulating AI — telling the EU parliament just before he was confirmed in post that he “won’t be the voice of regulating AI”.
While Vestager has been steady in her public backing for a framework to govern how AI is applied, talking at her hearing before the EU parliament of the importance of people’s trust and Europe having its own flavor of AI that must “serve humans” and have “a purpose”.
“I don’t think that we can be world leaders without ethical guidelines,” she said then. “I think we will lose it if we just say no let’s do as they do in the rest of the world — let’s pool all the data from everyone, no matter where it comes from, and let’s just invest all our money.”
At the same time Vestager signalled a willingness to be pragmatic in the scope of the rules and how they would be devised — emphasizing the need for speed and agreeing the Commission would need to be “very careful not to over-regulate”, suggesting she’d accept a core minimum to get rules up and running.
Today’s proposal steers away from more stringent AI rules — such as a ban on facial recognition in public places. On biometric AI technologies, Vestager described some existing uses as “harmless” during today’s press conference — such as unlocking a phone or passing through automatic border gates — whereas she stressed the difference in terms of rights risks related to the use of remote biometric identification tech such as facial recognition.
“With this white paper the Commission is launching a debate on the specific circumstance — if any — which might justify the use of such technologies in public space,” she said, putting some emphasis on the word ‘any’.
The Commission is encouraging EU citizens to put questions about the digital strategy for Vestager to answer tomorrow, in a live Q&A at 17.45 CET on Facebook, Twitter and LinkedIn — using the hashtag #DigitalEU
Do you want to know more on the EU’s digital strategy? Use #DigitalEU to share your questions and we will ask them to Margrethe Vestager this Thursday. pic.twitter.com/I90hCR6Gcz
— European Commission (@EU_Commission) February 18, 2020
Platform liability
There is more to come from the Commission on the digital policy front — with a Digital Services Act in the works to update pan-EU liability rules around Internet platforms.
That proposal is slated to be presented later this year and both commissioners said today that details remain to be worked out. The possibility that the Commission will propose rules to more tightly regulate online content platforms already has content farming adtech giants like Facebook cranking up their spin cycles.
During today’s press conference Breton said he would always push for what he dubbed “shared governance” but he warned several times that if platforms don’t agree an acceptable way forward “we will have to regulate” — saying it’s not up for European society to adapt to the platforms but for them to adapt to the EU.
“We will do this within the next eight months. It’s for sure. And everybody knows the rules,” he said. “Of course we’re entering here into dialogues with these platforms and like with any dialogue we don’t know exactly yet what will be the outcome. We may find at the end of the day a good coherent joint strategy which will fulfil our requirements… regarding the responsibilities of the platform. And by the way this is why personally when I meet with them I will always prefer a shared governance. But we have been extremely clear if it doesn’t work then we will have to regulate.”
Internal market commissioner, Thierry Breton
ormlacom · 7 years
Why you want blockchain-based AI, even if you don’t know it yet
The other night, my nine-year-old daughter (who is, of course, the most tech-savvy person in the house), introduced me to a new Amazon Alexa skill.
“Alexa, start a conversation,” she said.
We were immediately drawn into an experience with a new bot or, as the technologists would say, “conversational user interface” (CUI). It was, we were told, the recent winner of an Amazon AI competition, built by students at the University of Washington.
At first, the experience was fun, but when we chose to explore a technology topic, the bot responded, “Have you heard of net neutrality?” What we experienced thereafter was slightly discomforting. The bot, seemingly innocuously, cited a number of articles that she “had read on the web” about the FCC, Ajit Pai, and the issue of net neutrality. But here’s the thing: all four articles she recommended had a clear anti-Pai bias.
Now, the topic of Net Neutrality is a heated one and many smart people make valid points on both sides, including Fred Wilson and Ben Thompson. That is how it should be.
But the experience of the Alexa CUI should give you pause, as it did me. To someone with limited familiarity with the topic of net neutrality, the voice seemed soothing and the information unbiased. But if you have a familiarity with the topic, you might start to wonder, “wait … am I being manipulated on this topic by an Amazon-owned AI engine to help the company achieve its own policy objectives?”
The experience highlights some of the risks of the AI-powered future into which we are hurtling at warp speed.
It’s a reminder that big companies, such as Amazon, have traditionally had big advantages when it comes to big data and AI.
The trust problem with centralized big data
According to Trent McConaghy, CTO of BigChainDB, AI took a huge evolutionary step forward in 2001. This was when two Microsoft researchers named Banko and Brill discovered something that now seems obvious to all of us: The bigger the data set you’re analyzing by orders of magnitude, the lower the error rates you get.
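The effect is straightforward to reproduce on a toy problem. Here is a minimal sketch, assuming scikit-learn and synthetic data (not Banko and Brill's original natural-language task): hold the model fixed, grow the training set, and watch the test error fall.

```python
# A minimal sketch of the Banko-Brill observation: the same model,
# trained on more data, tends to make fewer errors. Synthetic data only.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=50_000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for n in (100, 1_000, 10_000, len(X_train)):
    model = LogisticRegression(max_iter=1000).fit(X_train[:n], y_train[:n])
    error = 1 - model.score(X_test, y_test)
    print(f"train size {n:>6}: test error {error:.3f}")
```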
The era of Big Data was officially upon us and the race was on.
But if the race is about gathering, storing, and analyzing as much data as possible, then who is in the pole position to win? That’s right, the FANGs in the U.S. (Facebook, Apple, Netflix, Google), the BATs in China (Baidu, Alibaba, Tencent), and the wealthy Fortune 1000 or so multinational corporations.
They are the only ones with the reach and capital to get more data, store it, analyze it, and build AI models on top of it. What’s more, they are the only ones who can offer starting salaries in the $300,000 to $500,000 range and top-tier salaries that extend into seven and eight figures. Your son or daughter may not make it to the NBA or NFL, but become a top AI scientist and you’re doing great.
The net effect of all of this is that the rich become even richer and more powerful and the barriers to innovation become even higher.
It is not only innovation that suffers, however. The closed nature of big-company AI means society must put its trust in “black boxes.”
Let’s look at how AI works to help make this clear. There are three essential layers:
The data repository
The algorithm/machine learning engine
The AI interface.
If you are going to trust your decision-making to a centralized AI source, you need to have 100 percent confidence in:
The integrity and security of the data (are the inputs accurate and reliable, and can they be manipulated or stolen?)
The machine learning algorithms that inform the AI (are they prone to excessive error or bias, and can they be inspected?)
The AI’s interface (does it reliably represent the output of the AI and effectively capture new data?)
In a centralized, closed model of AI, you are asked to implicitly trust in each layer without knowing what is going on behind the curtains.
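For contrast, here is what a verifiable version of the first layer could look like. This is a minimal sketch using only Python's standard library; the `ledger` dict is a stand-in for whatever tamper-evident registry (a blockchain, in this article's framing) you would anchor fingerprints to.

```python
# A minimal sketch of verifiable data integrity: anchor a dataset's hash
# somewhere tamper-evident so anyone can later check the bytes match.
import hashlib

def fingerprint(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

ledger = {}  # hypothetical stand-in for an on-chain registry

dataset = b"thermostat readings 2017-10-16 ..."
ledger["nest/october"] = fingerprint(dataset)

# Later, a model builder can verify the data was not altered in transit:
received = b"thermostat readings 2017-10-16 ..."
assert fingerprint(received) == ledger["nest/october"], "data was tampered with"
```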
For a simple conversation with a nine-year-old, blind trust may not be the end of the world. But for certain African-American criminal defendants, the implications can be life-altering: according to both the New York Times and Wired, a proprietary machine-learning system called COMPAS, used by courts in many parts of the U.S., assigns higher risk scores to Black defendants than to white defendants with all other data points being equal — scores that inform bail and sentencing decisions.
In effect, the AI makes racially biased decisions, but no one can inspect it, and the company that makes it will not explain it. It is closed, it is hidden, and models like these are in the hands of big, powerful companies that have no incentive to share them or reveal how they work.
How blockchains level the playing field and add trust
Over time, more and more data will flow into blockchains, and that will reduce the big data advantage that the FANGs, BATs, and Fortune 1000 have over the little guys.
As Deepak Dutt, CEO of AI-based identity proofing company Zighra says, “When data is commoditized, AI algorithms become the most valuable part of the ecosystem.” In other words, we’ll see a power shift from those who own big sets of data to those who build smart, useful algorithms.
That’s great, but if we’re moving data to blockchains, some big, thorny questions still exist. For example:
Where does the data go?
How is it discovered and utilized?
Why would people put their data in there?
And don’t the “big guys” still have a huge advantage in terms of building powerful AI?
Welcome to the world of Blockchain+AI.
3 blockchain projects tackling decentralized data and AI
A number of projects have popped up to reward people through cryptographic tokens for making their data available through a decentralized marketplace. The result could be ever-more accurate AI models and the ability to create valuable conversational user interfaces, all with the trust and transparency that blockchains offer.
We are going to look at three of them.
1. Ocean Protocol. On the repository level, the Ocean Protocol aims to create a “decentralized data exchange protocol and network that incentivizes the publishing of data for use in the training of artificial intelligence models.” Put more simply, if you upload valuable data to the Ocean network and your data is used by someone else to train an AI model, you are compensated.
Let’s take one of my favorite examples: my Nest thermostat. Right now, data is uploaded constantly from my thermostat to Google. With data from me and all the other Nest owners, Google has a really strong data set against which it can build AI services — services that could, for example, know when to send an offer for insulation or new windows to my house.
That data, which is mine (and yours), has value, but Google currently gets it for free.
What if, however, an enterprising home automation AI scientist (let’s call her Alice) believes she can build a better model than Google can?
In the Ocean model, Alice would license your data (and the millions of other data points out there) and compensate you with some amount of Ocean tokens.
Now think even bigger …
All of that data you are giving away for free (Nest, Fitbit, Hue lights, Ring doorbell, and every other IoT device out there) now has:
data integrity (everyone knows the source of the data)
clear ownership (you)
and thanks to cryptocurrencies and blockchains, a cost-effective way to buy and/or lease it.
You’re happy, since you’ll be getting compensated for something you’re currently giving away for free. Alice is happy, since she (eventually) will have access to the same dataset that Google has. Boom — playing field leveled, thanks to an open data marketplace. And we’re all safer from bias and error because the AI built on this data comes with more transparency, since the data sets that inform the models are known.
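To make the flow concrete, here is a hypothetical sketch in Python. The marketplace class, token amounts, and names are all illustrative; this is not the actual Ocean Protocol API.

```python
# A hypothetical sketch of the Ocean-style flow: Alice licenses a data
# listing, the owner is paid in tokens, and provenance is recorded.
from dataclasses import dataclass, field

@dataclass
class DataListing:
    owner: str
    price_tokens: int
    payload: bytes

@dataclass
class Marketplace:
    balances: dict = field(default_factory=dict)
    provenance: list = field(default_factory=list)

    def license(self, buyer: str, listing: DataListing) -> bytes:
        # Move tokens from buyer to data owner and record who used whose data.
        self.balances[buyer] = self.balances.get(buyer, 0) - listing.price_tokens
        self.balances[listing.owner] = self.balances.get(listing.owner, 0) + listing.price_tokens
        self.provenance.append((buyer, listing.owner))
        return listing.payload

market = Marketplace()
nest_data = DataListing(owner="you", price_tokens=5, payload=b"...")
training_bytes = market.license("alice", nest_data)
print(market.balances)  # {'alice': -5, 'you': 5}
```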
Another notable player in this space is IOTA, which already launched its marketplace.
2. SingularityNet. Now, let’s say Alice has really cracked the code on a powerful AI algorithm that could help marketers, government officials, or environmentalists understand how weather patterns affect energy consumption. That’s where SingularityNet comes in, focusing on the AI level.
SingularityNet, which just closed its hotly anticipated ICO and has a strong leadership team, including AI pioneers Ben Goertzel and David Hanson, aims to be the first AI-as-a-service (AIaaS) blockchain-based marketplace. In their world, Alice offers up her model (for sale or rent) to others for use against their own dataset. Thanks to a standardized AI taxonomy, a search engine helps users discover and rapidly integrate Alice’s model with complementary models, creating even more powerful and better-trained models.
Coming back to our Nest example, let’s say that Alice’s model is built to study the home energy market in New York City. Combine that with models for Newark, Stamford, and Long Island, and you can start getting even better insights about tri-state area consumption.
Since ownership of the model is clear (it belongs to Alice), her intellectual property is protected. Every time her model is used, she is compensated in SingularityNet’s AGI tokens (AGI being the acronym for Artificial General Intelligence). Now you have the data sets that the big guys have AND access to the AI models they have as well.
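Mechanically, per-use royalties could look something like the hypothetical sketch below. Again, the names and fee logic are invented for illustration, not SingularityNet's actual API.

```python
# A hypothetical sketch of per-call model royalties: each invocation of
# Alice's registered model credits her account with AGI-style tokens.
class ModelRegistry:
    def __init__(self):
        self.models, self.royalties = {}, {}

    def register(self, name, fn, owner, fee=1):
        self.models[name] = (fn, owner, fee)
        self.royalties.setdefault(owner, 0)

    def invoke(self, name, *args):
        fn, owner, fee = self.models[name]
        self.royalties[owner] += fee  # the owner is paid on every use
        return fn(*args)

registry = ModelRegistry()
registry.register("nyc_energy", lambda temp: temp * 1.3, owner="alice")
registry.invoke("nyc_energy", 20.0)
print(registry.royalties)  # {'alice': 1}
```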
For those of you familiar with the crypto space, the project will sound a lot like Numerai, albeit with a broader focus than the hedge-fund disintermediation objective Numerai has.
The implications of a successful rollout of the more broadly focused SingularityNet on every industry could be quite dramatic. It should lead to an arms race in terms of AI models among industry competitors and will likely impact the required skill sets for jobs of the future.
3. SEED. Finally, at the interface level comes SEED, a project that is looking to give us all confidence that we can actually trust the bots in our lives.
According to SEED, “The bot market is estimated to grow from $3 billion to $20 billion by 2021,” a projection that means interactions like the one my daughter and I had with Alexa will become much more common and potentially more risky. After all, even if you completely trust Amazon, there’s still the possibility the bot you are interfacing with has been hijacked.
The solution for this is the combination of the SEED Network, the SEED Network Marketplace, and the SEED token.
The SEED Network is an open-source, decentralized network where any and all bot interactions can be managed, viewed, and verified. It is also the framework for ensuring that the data fed into the AI via the conversational user interface aka “bot” can be assigned a data owner who can be compensated for it.
The Marketplace is the way aspiring bot creators, like AI model creators, can sell and license the various components they have built to others who need them. While the University of Washington students who built the winning AI for Amazon were probably thrilled with their $500,000 check, they would likely be even more thrilled to earn a small royalty on every interaction their CUI has with Alexa’s users in perpetuity.
Finally, the SEED token is the mechanism through which bot creators and data owners (you and I) are compensated for the value created inside the network.
To round it out, let’s come back to Alice. She has not only built an AI for home energy use, she has built a bot that will periodically ask you, “Hey, are you feeling hot or cold in your house right now?” When you answer, you are feeding data into the AI and into the AI repository. That’s your data. Why shouldn’t you be compensated for it? After all, it makes the AI better and enriches the data repository. SEED says you should, and it secures your asset rights in the blockchain.
When all is said and done, SEED will offer you better protection for the data you offer and greater confidence in the authenticity and reputation of the bot with which you are interacting.
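A rough sketch of the authenticity check might look like this. HMAC with a shared secret stands in for the public-key signatures a real decentralized network would use, and the registry and bot names are invented.

```python
# A minimal sketch of the "can I trust this bot?" check: the bot signs its
# messages, and the client verifies them against a registered key.
import hashlib
import hmac

bot_registry = {"alice_energy_bot": b"registered-secret"}  # hypothetical on-network record

def sign(secret: bytes, message: str) -> str:
    return hmac.new(secret, message.encode(), hashlib.sha256).hexdigest()

def verified(bot_id: str, message: str, signature: str) -> bool:
    secret = bot_registry.get(bot_id)
    return secret is not None and hmac.compare_digest(sign(secret, message), signature)

msg = "Are you feeling hot or cold in your house right now?"
sig = sign(bot_registry["alice_energy_bot"], msg)
print(verified("alice_energy_bot", msg, sig))  # True: the bot has not been hijacked
```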
The promise of blockchain-based AI
Blockchain-based AI projects are still in very early development, and the big data kings have a huge advantage, but so did the Atlanta Falcons at halftime of last year’s Super Bowl.
As blockchains drive into the mainstream, we will see more and more data hitting decentralized marketplaces and exchanges. As people realize the value their personal data has, along with the opportunities to monetize it, and as networks like SEED, SingularityNet, and Ocean mature, we will see a tipping point in the evolution of big data, moving from a closed, siloed phenomenon to open systems where the creators of data are more fairly rewarded for their contributions.
It is too early to tell which protocols will be the winners and whether these three first movers I’ve pointed to will remain in the lead or lose out to the next wave of fast-followers.
The only clear thing is that the winners will be the developers and consumers whose data and intellectual property will be rewarded and whose experiences will be protected from bad or manipulative actors by open, transparent systems.
Jeremy Epstein is CEO of Never Stop Marketing and author of The CMO Primer for the Blockchain World. He currently works with startups in the blockchain and decentralization space, including OpenBazaar, IOTA, and Zcash.
nancydhooper · 7 years
Text
New York City Takes on Algorithmic Discrimination
The city will create a task force to review its agencies’ use of algorithms and the policy issues they implicate.
Invisible algorithms increasingly shape the world we live in, and not always for the better. Unfortunately, few mechanisms are in place to ensure they’re not causing more harm than good.
That might finally be changing: A first-in-the-nation bill, passed yesterday in New York City, offers a way to help ensure the computer codes that governments use to make decisions are serving justice rather than inequality.
Computer algorithms are a series of steps or instructions designed to perform a specific task or solve a particular problem. Algorithms inform decisions that affect many aspects of society. These days, they can determine which school a child can attend, whether a person will be offered credit from a bank, what products are advertised to consumers, and whether someone will receive an interview for a job. Government officials also use them to predict where crimes will take place, who is likely to commit a crime, and whether someone should be allowed out of jail on bail.
Algorithms are often presumed to be objective, infallible, and unbiased. In fact, they are highly vulnerable to human bias. And when algorithms are flawed, they can have serious consequences.
Just recently, a highly controversial DNA testing technique used by New York City’s medical examiner put thousands of criminal cases in jeopardy. Flawed code can also further entrench systemic inequalities. The algorithms used in facial recognition technology, for example, have been shown to be less accurate on Black people, women, and juveniles, putting innocent people at risk of being labeled crime suspects. And a ProPublica study has found that tools designed to determine the likelihood of future criminal activity made incorrect predictions that were biased against Black people. These tools are used to make bail and sentencing decisions, replicating the racism in the criminal justice system under a guise of technological neutrality.
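At its core, the ProPublica-style audit is a simple computation: compare error rates across groups. A minimal sketch with invented records:

```python
# Compare the false positive rate of a risk tool across groups — the kind
# of disparity ProPublica reported. The records here are invented.
def false_positive_rate(records):
    flagged_innocent = sum(1 for r in records if r["high_risk"] and not r["reoffended"])
    innocent = sum(1 for r in records if not r["reoffended"])
    return flagged_innocent / innocent

records = [
    {"group": "black", "high_risk": True,  "reoffended": False},
    {"group": "black", "high_risk": False, "reoffended": False},
    {"group": "white", "high_risk": False, "reoffended": False},
    {"group": "white", "high_risk": False, "reoffended": False},
]
for group in ("black", "white"):
    subset = [r for r in records if r["group"] == group]
    print(group, false_positive_rate(subset))  # black 0.5, white 0.0
```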
But even when we know an algorithm is racist, it’s not so easy to understand why. That’s in part because algorithms are usually kept secret. In some cases, they are deemed proprietary by the companies that created them, who often fight tooth and nail to prevent the public from accessing the source code behind them. That secrecy makes it impossible to fix broken algorithms.
The New York City Council yesterday passed legislation that we are hopeful will move us toward addressing these problems. New York City already uses algorithms to help with a broad range of tasks: deciding who stays in and who gets out of jail, teacher evaluations, firefighting, identifying serious pregnancy complications, and much more. The NYPD also previously used an algorithm-fueled software program developed by Palantir Technologies that takes arrest records, license-plate scans, and other data, and then graphs that data to supposedly help reveal connections between people and even crimes. The department since developed its own software to perform a similar task.
The bill, which is expected to be signed by Mayor Bill de Blasio, will provide a greater understanding of how the city’s agencies use algorithms to deliver services while increasing transparency around them. This bill is the first in the nation to acknowledge the need for transparency when governments use algorithms and to consider how to assess whether their use results in biased outcomes and how negative impacts can be remedied.
The legislation will create a task force to review New York City agencies’ use of algorithms and the policy issues they implicate. The task force will be made up of experts on transparency, fairness, and staff from non-profits that work with people most likely to be harmed by flawed algorithms. It will develop a set of recommendations addressing when and how algorithms should be made public, how to assess whether they are biased, and the impact of such bias.
These are extremely thorny questions, and as a result, there are some things left unanswered in the bill. It doesn’t spell out, for example, whether the task force will require all source code underlying algorithms to be made public or if disclosing source code will depend on the algorithm and its context. While we believe strongly that allowing outside researchers to examine and test algorithms is key to strengthening these systems, the task force is charged with the responsibility of recommending the right approach. Similarly, the bill leaves it to the task force to determine when an algorithm disproportionately harms a particular group of New Yorkers — based upon race, religion, gender, or a number of other factors. Because experts continue to debate this difficult issue, rigorous and thoughtful work by the task force will be crucial to protecting New Yorkers’ rights.
The New York Civil Liberties Union testified in support of an earlier version of the bill in October, but we will be watching to see the exact makeup of the task force, what recommendations are advanced, and whether de Blasio acts on them. We will also be monitoring to make sure the task force gets all of the necessary details it needs from the agencies.
Algorithms are not inherently evil. They have the potential to greatly benefit us, and they are only likely to become more ubiquitous. But without transparency and a clear plan to address their flaws, they might do more harm than good.
Co-op Digital newsletter 16 Oct 2017: Retail adapting, maintaining code and fixing values, pretend to be an AI
Hello, this is the Co-op Digital newsletter. Send us feedback at [email protected].
[Image: ElSaid et al]
.
Futuretail
An interesting long read about how retail is changing as it adapts to the challenge of the internet. Cheaper discounters like Lidl. Staff-less shops (“All of this means it only takes four members of [BingoBox] staff to run 40 stores”). Integrating retail and online experiences (Alibaba’s Hema). Self-driving shops.
.
Maintaining code, maintaining values
Google shrank its translation code by a factor of 1,000 by replacing most of it with machine learning. At first read this is a good thing, because a smaller codebase is probably more efficient and easier to maintain. But perhaps bugs or biases are *harder* to find and fix in machine learning systems because by definition you don’t know exactly how your ML black box works?
If you ran an organisation and needed to change its values, you wouldn’t write a new Important Statement of Our Values. Culture and values are about people and behaviour, not documents. You’d need to retrain (or maybe replace) the people with the bad values, and make sure you hired people with the desired values. What’s the equivalent for algorithms and AIs? If you discover your AI has biases, do you tear it down and start again? Retrain it with new, unbiased data? Related: The Seven Deadly Sins of AI Predictions.
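One concrete version of the "retrain it with new, unbiased data" option is to reweight the existing training set so each group contributes equally before refitting. A minimal sketch, assuming scikit-learn and invented data:

```python
# Reweight samples so each group contributes equally, then refit the model.
from collections import Counter
from sklearn.linear_model import LogisticRegression

X = [[0.2, 1], [0.4, 1], [0.5, 1], [0.9, 0]]   # second column: group membership
y = [0, 1, 1, 1]
groups = [row[1] for row in X]

# Weight each sample inversely to its group's frequency.
counts = Counter(groups)
weights = [len(groups) / (len(counts) * counts[g]) for g in groups]
print(weights)  # majority group down-weighted, minority group up-weighted

model = LogisticRegression().fit(X, y, sample_weight=weights)
```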
.
Pretend to be an AI!
Google's new browser experiment lets you learn about basic AI. But this is more fun: Paperclips - a “clicker” game based on an AI thought experiment. Paperclips is so moreish and popular that the website might not work, so here’s a description (spoilers!): you click buttons and numbers go up, making you feel good, and every so often there are new buttons to click, and eventually an existential abyss. It’s quite fun, though it slightly makes you feel like you’ve been duped into being the treadmill in a bitcoin mining scam. Games like this provide intermittent variable rewards, and the same psychological mechanism is what keeps people checking their mobiles and playing Facebook games.
.
Social media and politics
The problem of Russian meddling on Google and Facebook is much greater than has been previously revealed.
Trump digital director says “Twitter is how [Trump] talked to the people” but Facebook staffers helped with targeted advertising. “Among other services, Facebook’s elections advertising allows campaigns to take lists of registered voters drawn from public records and find those people on Facebook."
Ofcom chair’s personal view is that Facebook, Google and others are publishers rather than neutral technology platforms, a hint that she thinks they should be regulated.
.
Whole Foods may eat Amazon Fresh (but Prime rules all)
Amazon says that Fresh may shut down as a result of the Whole Foods deal, which perhaps simply means that it’ll be rebadged as Whole Foods. But Prime goes from strength to strength: “We are going to try to do a lot to make Prime really valuable for when you’re shopping at Whole Foods [...] make Amazon Prime the customer rewards program at Whole Foods Market.”
Prime is becoming more family friendly: monthly payments for students, shopping support for teens on parental Prime accounts.
.
Corbyn: what if Uber were co-operatively run?
Jeremy Corbyn: “Imagine an Uber run co-operatively by their drivers, collectively controlling their futures, with profits shared or re-invested.” Of course, Uber actually runs at a loss (investors have to keep putting money in so Uber can offer rides at below-cost as part of its plan to own a big percentage of All Future Journeys Taken), so the snarky comeback is asking whether a co-op would be happy to keep pouring money in to achieve Uber’s goals. Though of course a co-operative Uber might not have VC-economics driving it forward, and so might have different goals.
.
In brief
Hello Fresh planning its IPO with 1.3m active customers. Customer retention has been a challenge for a few of the meal delivery companies, so their choice of a punchy definition of “active” is interesting: “the number of uniquely identified customers who received at least one box within the last 13 weeks, as of June 2017 (including [...] customers who ordered during the relevant period but discontinued their orders)”.
Amazon’s experimenting with delivery all the way to your fridge - a couple of weeks ago, we talked about Walmart doing the same.
Shell is buying an electric charging network - gradual fossil fuel divestment.
Transport for London is considering selling its expertise with open data.
Smart watch sensors are getting good enough for life-saving monitoring.
A year ago in this newsletter: Ocado was using machine learning to triage customer support for its 0.5m active customers.
Last week: Ghostface Killah’s Cream cryptocurrency Initial Coin Offering (the ICO we’re all waiting for is Ol’ Dirty Blockchain). This week technology-shy football manager Harry Redknapp is recommending one.
.
CoopDigital news
Steve Foreshew-Cain: the Funeralcare service exits beta and Design Manchester has begun.
Co-op Digital talks service design at Design Manchester.
We held a massive retro and this is what we learnt - retrospectives for 50-people teams need more structure and planning than a simple What to keep doing/stop doing/change? meeting.
Events
Federation beta Tue 17 Oct 1pm at Federation House 1st floor.
Membership Wed 18 Oct 2pm at Federation House 6th floor.
Food Leading the Way - Thu 19 Oct 11am at at 1 Angel Square 4th Floor Blue Zone.
Co-op platform - Wed 25 Oct 11.30am at 1 Angel Square 8th floor.
Location services - Wed 25 Oct 3pm at Federation House 6th floor.
Food Leading the Way - Thu 26 Oct 11am at 1 Angel Square 4th Floor Blue Zone.
.
Thanks for reading. If you want to find out more about Co-op Digital, follow us @CoopDigital on Twitter and read the Co-op Digital Blog.
Designing Tips For The Clever Web Designer
Website design can be quite an intimidating and mysterious art to those who aren’t experienced with it. However, if you take the time to learn more about it, you will realize that it is really not so difficult.
Make sure there is a prominent tagline that shows up well on your site. This will let people know what your site is about. This is important, since visitors decide within moments whether a site is worth their time.
Search Box
Include a search element that allows visitors to search within your website content. A search box lets the visitor easily find a specific piece of information on your site. If your site is not equipped with one, they may leave for a site that allows a search. Always put the search box near the top right of the page, because people will look for it there.
Keep page sizes to a minimum. Users with slower Internet connections may decide that the wait is not worth it if your site loads slowly. You don’t want your visitors waiting for a page to load because they may just end up leaving.
This makes your website easier for visitors to understand and improves readability for search engines.
Free Tools
You should utilize free software to help set up your website. Many people believe that expensive software is the only way to get things done, but there are currently numerous excellent free tools on the market that can help you develop a very professional-looking website. You just need to do a little searching to locate the free tools that work best for you.
Set up a way for users to submit feedback about your website. If your visitors feel like they are a part of your site, they will become return visitors.
Test your site early and test it frequently. Run usability tests as a reader would, starting with the basic layout, to be sure you understand how users actually interact with your website. Continue to test as you expand your website.
Hosting your own website might not be a good idea. Design the site, or most of it, yourself, but leave the hosting to a professional service so that you do not have to worry about its security and safety.
Ask your target audience what they think of your site. This gives you better focus for your site design and for choosing features to include. Taking advice from your target visitors will help your design to be successful.
You will want every person who visits your website to see key text in bold, easy-to-read type, so that viewers immediately see it when they’re following links.
Have another person test your website for functionality throughout the entire design process. Every time you add a new feature, a neutral viewer should give you their opinion. You might not think much of a video that loads slowly, but others might think differently. Always be sure you’re seeking outside, unbiased opinions.
You should create a visual sitemap so that you can plan ahead accurately. A visual sitemap lets you precisely oversee the structure of your website. From there, you’ll be able to identify any areas that need improvement or that haven’t yet received enough work. Nothing compares to having a visual overview of your project.
Think like an artist when designing your website. This means you should be prepared to receive inspiration as it occurs. If you’re eating out and an idea comes to you, grab your iPad and record it. If you come up with a design idea while you are at work, call and leave yourself a message on your phone so that you can refer to it later.
Having unused space (white space) on your site may be a really good design feature, so do not make the mistake of thinking that your website should be packed all the way.
Utilizing free stock images can save you big bucks. Use the money you save on other parts of your website’s design.
Make sure you use a descriptive title for your website; you’ll see just how common this error is. It is essential that your site has a title, since search engines use it as part of their search engine optimization algorithms.
Hopefully, after reviewing this information, website design looks a little less forbidding to you. As you learn more about web design, creating a website will not seem so difficult. If you use the ideas from this article, you can have your site going in no time.