#there is no intrinsic moral value to the use of AI because the AI is not a conscious thing
magebird · 2 years ago
Sometimes I feel like the discourse about AI art misses the actual point of why it’s not a good tool to use.
“AI art isn’t ‘real’ art.” —> opinion-based, echoes the same false commentary about digital art in general, just ends up in a ‘if you can’t make your own store-bought is fine’ conversation, implies that if art isn’t done a certain way it lacks some moral/ethical value, relies on the emotional component of what art is considered “real” or not which is wildly subjective
“AI art steals from existing artists without credit.” —> fact-based, highlights the actual damage of the tool, isn’t relying on an emotional plea, can actually lead to legally stopping overuse of AI tools and/or the development of AI tools that don’t have this problem, doesn’t get bogged down in the ‘but what if they caaaaan’t make art some other way’ argument
Like I get that people who don’t give a shit about plagiarism aren’t going to be swayed, but they weren’t going to be swayed by the first argument either. And the argument of “oh well AI art can’t do hands/isn’t as good/can’t do this thing I have decided indicates True Human Creativity” will eventually erode since… the AI tools are getting better and will be able to emulate that in time. It just gets me annoyed when the argument is trying to base itself on “oh this isn’t GOOD art” when AI does produce interesting and appealing images and the argument worth having is much more about the intrinsic value of artists than the perceived value of the works that are produced.
63 notes
existentialcatholic · 8 months ago
The above quote from Martin Luther King, Jr., points out an alarming trend in human behavior: questions of right and wrong have become a matter of majority rule. This phenomenon is natural. Psychological studies have shown that the existence of litter in an environment predicts littering by other individuals. In a generation of widespread AI use, students have increasingly used AI to plagiarize assignments and are more likely to do so when they know that other students are doing it. On the most extreme level, media portrayals of abortions as an option frequently needed and taken can influence the media consumer to agree that abortions should remain widely available.
Catholic theology defies the societal trend of morality becoming a decision of the majority. As Catholics, we maintain that moral absolutes exist and rely on these absolutes, as given to us in the Decalogue (Ten Commandments) and analyzed further in Church teachings. Moral absolutes specify “intrinsically evil acts” and point to what is right by indicating what actions are wrong. In this post, I will answer why moral absolutes are important for Catholic theology. I will also examine why some people reject the idea of moral absolutes, and why this rejection cannot be maintained consistently.
Why are moral absolutes important for Catholic morality and why do some people reject the idea of moral absolutes?
Catholic theology recognizes activity as “morally good when it attests to and expresses the voluntary ordering of the person to his ultimate end and the conformity of a concrete action with the human good as it is acknowledged in its truth by reason” (VS 72). This quote from Veritatis Splendor tells us several features of the Catholic understanding of morality. First, moral good is voluntary. Without the freedom to act, there is no morality. Second, moral good is aligned with a person’s ultimate end. In Catholicism, we understand this ultimate end to be union with G-d. Moral actions contribute to our journey toward this end. Third, moral good consists in concrete actions. In other words, morality is a lived experience and not just an intellectual exercise. Fourth, moral good exists in conformity with the value of reason. When we perform morally good actions, our reason and our will align in pursuit of the good. With a well-formed reason, doing the good makes sense.
In addition to recognizing, encouraging, and applauding morally good activity, Catholic theology recognizes and condemns morally bad activity through moral absolutes. Moral absolutes are one aspect of the Catholic moral framework that contributes to moral good. They provide negative definitions of the tenets of Catholic morality; that is, they tell us what is right by telling us what not to do in order to achieve the right and the good. Though negative, moral absolutes “allow human persons to keep themselves open to be fully the beings they are meant to be” (May, 162).
Moral Absolutes and Catholic Morality
May defines moral absolutes as “moral norms identifying certain types of action, which are possible objects of human choice, as always morally bad, and specifying these types of action without employing in their description any morally evaluative terms” (May, 142). They prohibit “acts which, per se and in themselves, independently of circumstances, are always seriously wrong by reason of their object” (RP, 17). Moral absolutes are important for Catholic morality because all judgments require a standard, and moral absolutes provide a standard for the judgments of Catholic morality. Moreover, the absolutes of Catholic morality have a Divine source, which provides secure authority for its teachings.
Catholic theology has moral absolutes because moral absolutes protect and promote what is good. They do so because moral absolutes function as standards of how failure to achieve moral good looks. Like danger signs, they tell us which actions and spiritual “places” or states to avoid. According to May, “They remind us that some kinds of human choices and actions, although responsive to some aspects of human good, make us persons whose hearts are closed to the full range of human goods and to the persons in whom these goods are meant to exist” (May, 162).
Conscience relies on the existence of moral absolutes. One definition of conscience is “one’s personal awareness of basic moral principles or truths” (May, 59). This awareness, called synderesis in the medieval tradition, refers to “our habitual awareness of the first principles of practical reasoning and of morality” (May, 59). Synderesis requires that principles of practical reasoning and morality exist in the first place. However, another level of conscience exists, which refers to a “mode of self-awareness whereby we are aware of ourselves as moral beings, summoned to give to ourselves the dignity to which we are called as intelligent and free beings” (May, 60). On this level as well, which tradition has referred to as conscientia, we require moral absolutes. Moral absolutes benefit conscientia by showing the standard to which we are called. Avoid lying to others or harming them. Do not dishonor G-d or one’s neighbor.
On the Rejection of Moral Absolutes
People who reject moral absolutes may fall into the camp of teleological ethical theory, which includes proportionalism and consequentialism. The proportionalist would weigh the “good” and “bad” effects of a moral choice and judge as right any moral decision that the actor perceived as producing more “good” effects than “bad.” The consequentialist would judge an act as right that had the relatively “best” consequences, no matter how one reached those consequences. Both of these moral theologies are called “teleological” because proponents place all focus and emphasis on the end, or telos, of human action.
A charitable proposal for why people may reject moral absolutes is because they get lost in the details of moral situations. For instance, committing credit card fraud is wrong. However, the reasons that one commits it or the details of why someone makes the decision could lead someone to call the action right. One could easily identify as wrong someone who commits credit card fraud to buy the newest smartphone. Committing said fraud to feed oneself or one’s children is still wrong, but the proportionalist would argue that the good of feeding someone outweighs the wrong of credit card fraud. The consequentialist would argue that the good end justifies the evil means.
To look at it from a simpler point of view, people may reject moral absolutes because they want to rationalize actions that are wrong. For instance, I used to be pro-choice. I took a teleological viewpoint and argued that allowing free access to abortion would produce the most beneficial consequences for those who were “in need” of abortion, be it due to financial, health, or relational reasons. As a pro-choicer, I argued erroneously that taking the life of an infant through abortion was a justifiable means of avoiding poverty, the potential negative health consequences of pregnancy, and the relational vulnerability of being a mother who had to take care of a newborn (especially for survivors of rape and incest). I rightly understood that extending permission to abort these pregnancies meant doing so for potentially all pregnancies, as well as all reasons to end those pregnancies. Even as the examples in my arguments did not necessarily require abortions, I knew that the emotional charge of the examples gave me the best chance at convincing someone to allow exceptions. As soon as I got someone to allow those exceptions, I would accuse the person of opposing abortion situationally, not on principle, and argue that there was no longer reason to restrict abortion on principle. I knew and know that abortion is wrong, but I went through this exercise in mental gymnastics to convince myself that it was excusable. Now, however, I know and acknowledge the constancy of moral absolutes.
Conclusion
As I stated above, moral absolutes are necessary for this framework of morality because absolutes give the judgments of Catholic morality their standard. As Canavan states, “if there are no absolutes, reasoning collapses into incoherence and yields no conclusions” (Canavan, 93). Without the standard of morality that the Decalogue provides, the claims of Catholic morality hold no more sway than the teachings of other ethical systems. The high standards set by Catholic morality, which we can only reach with the help of grace, would repel many from the ethical system. However, with the established moral absolutes that Catholic morality sets forward, the individual can value and strive to maintain the standards for behavior that the framework sets.
Moral absolutes help us understand our ultimate end of union with G-d in heaven. For one to achieve this union with our Creator, it stands to reason that one must exist in accordance with His plan. After all, the only way to become fit for union with Him is to become like Him. Recognizing the validity of moral absolutes is a vital part of living in accordance with G-d’s plan because appreciating and respecting His work in the universe involves acknowledging and following the laws that He put in place for its functioning. These laws are explained well in the Decalogue but spread out to further applications and specifications elsewhere in Church teaching.
-Esther
---
Canavan, Francis. “A Horror of the Absolute.” The Human Life Review 23, no. 1 (Winter 1997): 91-97.
John Paul II. Reconciliation and Penance. December 2, 1984. The Holy See. https://www.vatican.va/content/john-paul-ii/en/apost_exhortations/documents/hf_jp-ii_exh_02121984_reconciliatio-et-paenitentia.html.
John Paul II. Veritatis Splendor. August 6, 1993. The Holy See. https://www.vatican.va/content/john-paul-ii/en/encyclicals/documents/hf_jp-ii_enc_06081993_veritatis-splendor.html.
May, William E. An Introduction to Moral Theology. Second edition. Huntington, IN: Our Sunday Visitor, 1994.
4 notes
maxksx · 2 years ago
Well, that is something I would say, but honestly, I think we could pause on that claim and just take a step back and say that this question [… is] about immanent and transcendent impulses. And the question is, how much can you do with immanent impulses? It seems to me a very crucial discussion in all of these domains between people who say you’re just not gonna get far enough, with immanent impulses, you need some kind of transcendent claim – you need to have corporate social responsibility, you need to have friendly AI, you need to have some extrinsic structure of moral guidance on these self-perpetuating, self-augmenting processes - and on the other side there is a constituency that I think is quite small that is saying, well how far can we actually get by just building things up out of these impulses that are completely intrinsic to self-augmenting processes, that come out of the most basic type of vibrant, cybernetic arrangement and will give us a whole lot of stuff, will give us impulses […] we might not be happy with [what we end up with] but it’s certainly not the case that you have some wicked fact-value distinction that says values have to be ported in from outside.
You’re gonna get values coming from the process just because it is a self-augmenting, self-cultivating, self-perpetuating process, it must have a set of consistent parameters that, on the AI side, we now can call basic AI drives or ‘Omohundro drives’. So that’s the fundamental topic that I’m willing to put out here now.
https://twitter.com/wkqrlxfrwtku/status/1665803669814784000
4 notes
somnilogical · 5 years ago
modular "ethics":
a wrong and two rights make a right
<<I've been known to cause outrage by suggesting that people who really care about something shouldn't have romantic relationships. Think what would happen if I dared to suggest that those people should also seriously consider getting castrated. That would be crazy! And who am I to suggest that basically everyone claiming to be doing good is faking it? Then people would feel bad about themselves. We can't have that!>>
https://squirrelinhell.blogspot.com/2018/02/men-have-women-are.html
previously i talked about an infohazard about altruism that seemed to fuck with grognor. it feels useful to pass by the dead and look at their lives and choices.
i dont think that castrating yourself is a good intervention for doing stuff you care about, like this is patchwork constraints for an unaligned optimizer. if you arent altruistically aligned from core values, castrating yourself wont make you more aligned.
the "altruists" having babies thing is actual insane and pasek is right about that. pretty much all of society will try and gaslight you about this the way sometimes people are gaslit about "i need to have sex with lots of attractive fems to keep up my moral so i can do super good stuff afterwards.". like if people want to do good for the world it will flow out as a continuous expression of value not some brent dill kind of deal that institutions like CFAR accepted until there was too much social pressure for them to maintain this facade.
the entire premise that morality is this modular thing and you can help set the utility function of an FAI while being a terrible person, is wrong. yet organizations like CFAR keep thinking it will work out for them:
<<We believe that Brent is fundamentally oriented towards helping people grow to be the best versions of themselves. In this way he is aligned with CFAR’s goals and strategy and should be seen as an ally.
  In particular, Brent is quite good at breaking out of standard social frames and making use of unconventional techniques and strategies. This includes things that have Chesterton’s fences attached, such as drug use, weird storytelling, etc. A lot of his aesthetic is dark, and this sometimes makes him come across as evil or machiavellian.
  Brent also embodies a rare kind of agency and sense of heroic responsibility. This has caused him to take the lead in certain events and be an important community hub and driver. The flip side of this is that because Brent is deeply insecure, he has to constantly fight urges to seize power and protect himself. It often takes costly signalling for him to trust that someone is an ally, and even then it’s shaky.
  Brent is a controversial figure, and disliked by many. This has led to him being attacked by many and held to a higher standard than most. In these ways his feelings of insecurity are justified. He also has had a hard life, including a traumatic childhood. Much of the reason people don’t like him comes from a kind of intuition or aesthetic feeling, rather than his actions per se.
  Brent’s attraction to women (in the opinion of the council) sometimes interferes with his good judgement. Brent knows that his judgement is sometimes flawed, and has often sought the help of others to check his actions. Whether or not this kind of social binding is successful is not obvious.>>
https://pastebin.com/fzwYfDNq
<<AnnaSalamon 2/6/09, 5:54 AM
Aleksei, I don’t know what you think about the current existential risks situation, but that situation changed me in the direction of your comment. I used to think that to have a good impact on the world, you had to be an intrinsically good person. I used to think that the day to day manner in which I treated the people around me, the details of my motives and self-knowledge, etc. just naturally served as an indicator for the positive impact I did or didn’t have on global goodness.
(It was a dumb thing to think, maintained by an elaborate network of rationalizations that I thought of as virtuous, much the way many people think of their political “beliefs”/clothes as virtuous. My beliefs were also maintained by not bothering to take an actually careful look either at global catastrophic risks or even at the details of e.g. global poverty. But my impression is that it’s fairly common to just suppose that our intuitive moral self-evaluations (or others’ evaluations of how good of people we are) map tolerably well onto actual good consequences.)
Anyhow: now, it looks to me as though most of those “good people”, living intrinsically worthwhile lives, aren’t contributing squat to global goodness compared to what they could contribute if they spent even a small fraction of their time/money on a serious attempt to shut up and multiply. The network of moral intuitions I grew up in is… not exactly worthless; it does help with intrinsically worthwhile lives, and, more to the point, with the details of how to actually build the kinds of reasonable human relationships that you need for parts of the “shut up and multiply”-motivated efforts to work… but, for most people, it’s basically not very connected to how much good they do or don’t do in the world. If you like, this is good news: for a ridiculously small sum of effort (e.g., a $500 donation to SIAI; the earning power of seven ten-thousandths of your life if you earn the US minimum wage), you can do more expected-good than perhaps 99.9% of Earth’s population. (You may be able to do still more expected-good by taking that time and thinking carefully about what most impacts global goodness and whether anyone’s doing it.)>>
https://www.greaterwrong.com/posts/4pov2tL6SEC23wrkq/epilogue-atonement-8-8
like opposing this isnt self-denying moral asceticism or a signalling game of how good you can look (credibly signalling virtue is actually a good thing, i wish more people did it by for instance demonstrating how they win in a way that wouldnt work if they werent aligned. whose power seeded from their alignment.). its like... the alternative where people do things that it makes no sense for an altruist to do and then say that when they go to their day jobs they are super duper altruistic they swear; compartmentalizing in this way ...doesnt actually work.
people who want to obscure what altruism looks like will claim that this is moving around a social schelling point for who is to be ostracized. and that altruism as a characteristic of a brain isnt a cluster-in-reality that you can talk about. because it will be coopted by malicious actors as a laser to unjustly zap people with. these people are wrong.
both EA and CFAR are premised on some sort of CDT modular morality working. it is actually pretending to do CDT optimization because like with brent at each timestep they are pretending to think "how can we optimize utility moving forward?" (really i suspect they are just straight up mindcontrolled by brent, finding ways to serve their master because they used force and the people at CFAR were bad at decision theory) instead of seeking to be agents such that when brents plans to predate on people ran through them, brent would model it as more trouble than it was worth and wouldnt do this in the first place.
CFAR and EA will do things like allowing someone to predate on women because they are "insightful" or creating a social reality where people with genetic biases can personally devote massive amounts of time and money to babies who happen to be genetically related to them and then in their day job act "altruistically". as long as it all adds up to net positive, its okay right?
but thats not how it works and structures built off of this are utterly insufficient to bring eutopia to sentient life. in just the same way that "scientists" who when they arent at their day jobs are theists are utterly insufficient to bring eutopia to sentient life.
<<Maybe we can beat the proverb—be rational in our personal lives, not just our professional lives. We shouldn’t let a mere proverb stop us: “A witty saying proves nothing,” as Voltaire said. Maybe we can do better, if we study enough probability theory to know why the rules work, and enough experimental psychology to see how they apply in real-world cases—if we can learn to look at the water. An ambition like that lacks the comfortable modesty of being able to confess that, outside your specialty, you’re no better than anyone else. But if our theories of rationality don’t generalize to everyday life, we’re doing something wrong. It’s not a different universe inside and outside the laboratory.>>
--
to save the world it doesnt help to castrate yourself and make extra super sure not to have babies. people's values are already what they are, their choices have already been made. these sort of ad-hoc patches are what wrangling an unaligned agent looks like. and the output of an unaligned agent with a bunch of patches, isnt worth much. would you delegate important tasks to an unaligned AI that was patched up after each time it gave a bad output?
it does mean that if after they know about the world and what they can do, people still say that they specifically should have babies, i mark them as having a kind of damage and route around them.
someone not having babies doesnt automatically mark them as someone id pour optimization energy into expecting it to combine towards good ends. the metrics i use are cryptographically secure from being goodharted. so i can talk openly about traits i use to discern between people without worrying about people reading about this and using it to gum up my epistemics.
26 notes
pazodetrasalba · 2 years ago
Charm and Strange - from quarks to thoughts
Dear Caroline:
One of my habits, and a useful one for my profession (as it gives me provocative ideas to get the students talking), is to try to read at least one book a year that is really, really far from the things I believe in and cherish, so it's sort of a self-induced trigger, but also a test of my beliefs (Tyrion: A mind needs books like a sword needs a whetstone, if it is to keep its edge). I gather that you are one of those persons who really enjoys intellectual jousting (you mentioned as much in a post where you explained the pleasures of the rationalist community), but you also sharpen the edge somewhat by expecting yourself and others to follow up on their rational conclusions.
This comes to mind because of what you posted above, which I interpreted as your assimilation of the Rationalist Jedi Mind Tricks that have allowed you to internalize the very weird and frequently shocking and unpalatable sequiturs of Utilitarianism / Rationalism / EA -at least that is how they feel from my perspective. Stuff like poly, obsessing over AI, conflating practical with moral judgements of the 'harvesting organs from a healthy person' and other ends-justify-the-means type, or rigorously trying to quantify the value of human and animal life. I have been reading some articles about Peter Singer's thought, as he seems to be the main intellectual referent for EA, and find myself so at odds with much of it that he certainly deserves a place in my 'triggering reads' booklist. I would be grateful for a specific recommendation on any one of his volumes.
It would take me too long to nitpick all of my disagreements, but I imagine the most fundamental one stems from my rejection of his axiomatic assumption that the good of any one individual is of no more importance than the good of any other. I mean, this might be obvious from a certain abstract perspective (humans are generally equal in mental capacity and basic worth, and ought to enjoy the same set of basic rights) but I feel it is at the same time deeply morally wrong, and that some individuals (and precisely because they are seen as individuals, not as indistinct cogs of an abstract Totality) can legitimately stake a greater claim to our moral support: our family, our friends, our neighbors, those who we can interact with at a personal and human level. I've developed a certain scepticism for 'love for humanity' when it is not concrete and grounded, probably as an afterthought of some of the unpleasant consequences this can effect. Here rings very true for me that Kantian maxim of always treating others as ends in themselves, and not as means, each with intrinsic value and dignity and irreducible to some number or classification algorithm (or in Granny Weatherwax's words, “Sin, young man, is when you treat people like things. Including yourself. That's what sin is.”).
A very interesting (and ancient!) take on this which I have read about is the philosophical confrontation between Confucianism and Mohism in 5th-century BCE China, where the Mohist proclamation of indistinct universal love as their key idea and policy goes head to head with the other side's belief in distinctions. I think you would enjoy the Mozi greatly. I recently made a review of it which you can watch if you feel piqued. Whether it be Mozi's, the Gospel's or Utilitarianism's demand to love everybody in the same way, and dream of how wonderful the world would be if such a theory became fact, I can only scratch my head and ponder the impossibility -and from my personal stance, even undesirability- of such a world.
Quote:
And it is love that opens our eyes to the true source of the worth of persons: their inner preciousness, unrepeatability, and uniqueness. It is precisely a glimpse of the unrepeatable uniqueness of another human person that inspires love. Once this glimpse is achieved and love springs forth in the soul — as it does like a surprising gift — that love then has the remarkable power of allowing you to see more clearly and deeply the unique preciousness, as well as the humanity, of the person you love. That vision in turn inspires more love.
Peter J. Colosi
0 notes
warsofasoiaf · 7 years ago
Note
Any chance you want to give us some analysis of the psychologist and chaplain on the Unity, Sister Miriam? Reynolds did a pretty solid job making her more than some crazy protestant fundamentalist, and some of her "We Must Dissent" critiques on how technology is being applied have a tinge of your own cynicism, although filtered through the lens of an extreme evangelical.
Miriam is one controversial character. On the surface, she’s a Bible-thumping fundamentalist: her preferred government choice is even called “Fundamentalist,” and her facial expression seems to scream “Jesus is watching you.” Yet this reading does the writers a deep disservice; as the other factions show, they are capable of writing fully-realized characters and philosophies.
We don’t get an actual declaration of what sect of Christianity Miriam belongs to, or even what ecumenical form Christianity took at the time the Unity took off; all we know is that apparently the United States transformed into a theocracy at some point, so it’s probably some sect of Protestantism. Truth be told, it doesn’t actually matter whether we know or not; Miriam expresses herself well enough through her own quotes. She tends to express herself primarily in two ways: fierce condemnation of reckless progress without regard to morality, and a softer, comforting tone likely used among her own flock. This is key for Miriam; she cares about her people in a way that the other faction heads do not. Morgan, Santiago, and Zakharov see the virtue of their progress as proof of its intrinsic morality, Yang is nihilistic, Deirdre is rushing off so much with her plants and fungus that she’s losing touch with her humanity, and Lal is bureaucratic, impersonal, and more than a bit hypocritical, concerned with style over substance. Miriam wants to ensure that the spiritual wellbeing of her citizens is protected, and she truly practices what she preaches: “And so we return again to the holy void. Some say this is simply our destiny, but I would have you remember always that the void EXISTS, just as surely as you or I. Is nothingness any less a miracle than substance?” The fate of humanity is ever precarious, and Miriam knows that people can be pushed close to the breaking point; she is there with the balm of Gilead to help people feel better. Her people believe in her too: they’re more than willing to support a large military before feeling discontent, and they fight hard to accomplish her goals. More than any other faction, Miriam expresses sympathy for the downtrodden. On the other side, she is strict and condemnatory toward those who refuse her message, and her AI is fairly aggressive.
In this sense she still has that sort of militant preacher vibe, but it’s to the credit of the writers that they took this archetype and fleshed it out. Like every other faction, Miriam has her strengths and her weaknesses, things she can be lauded for and criticized.
One of the big criticisms levelled at Miriam is that she is either a Luddite, a technophobe, or suspicious of science itself. She is none of these; her approach to research stems from a true sense of social conservatism. After all, she’s a psychologist who understands the chemical states of matter; she is both clearly educated in multiple scientific disciplines and has no intrinsic distrust of science: “Evil lurks in the datalinks as it lurked in the streets of yesteryear. But it was never the streets that were evil.” Far from suggesting that she considers the datalinks evil in and of themselves, this quote locates the evil in how people use them, and other quotes give an even more complete picture. Look at this selection of quotes from her key work, “We Must Dissent,” her treatise castigating the technological development of the other factions: 
“Already we have turned all of our critical industries, all of our material resources, over to these… things… these lumps of silver and paste we call nanorobots. And now we propose to teach them intelligence? What, pray tell, will we do when these little homunculi awaken one day and announce that they have no further need for us?”
“And what of the immortal soul in such transactions? Can this machine transmit and reattach it as well? Or is it lost forever, leaving a soulless body to wander the world in despair?”
“Will we next create false gods to rule over us? How proud we have become, and how blind.”
Miriam is clear: she wants others to think about what they’re doing. The pursuit of progress without being cognizant of the risks and costs arguably helped contribute to the catastrophe of Earth. What things have the labtechs at the University of Planet failed to take into account, what corners did Morgan cut in pursuit of the next great product, what happens to the mental state of people in Deirdre’s psychic networks? She celebrates beneficial advances in technology; she even refers to the plasma accretion process creating “new miracles.” What she fears is their misuse. That’s why she doesn’t accumulate research points in the first couple of years and why her research is slow. Sure, there is less funding for laboratories over churches as well, but her greatest concern is to understand how these changes will affect the psychology of her people and of the society at large. The end-game techs are in no uncertain terms terrifying. Controlled singularities, self-aware colonies, molecular reassembly: all of these things improperly considered are an extinction-level event on their own, but each faction continues to roll them out, eager for gain, not knowing what next they will unleash because they lack the wisdom of restraint. Her dichotomy is best summed up in two quotes: “Beware, you who seek first and final principles, for you are trampling the garden of an angry God, and He awaits you just beyond the last theorem.” This sounds like a fire-and-brimstone street preacher, but it’s the other quote, the one that accompanies Quantum Machinery, that is exceptional: “Men in their arrogance claim to understand the nature of creation, and devise elaborate theories to describe its behavior. But always they discover in the end that God was quite a bit more clever than they thought.“ The proof is in the pudding there: while the former is direct, so a bit of fiery language can be expected, the latter reads as considered reflection and philosophy. 
Mankind is flawed and refusal to accept it leads to catastrophe.
Protestantism doesn’t have a Pope or Patriarch and instead professes a universal priesthood, suggesting that Miriam’s administration is modeled with theological and secular components, each influencing the other but neither under the direction of a religious caste. A Democratic Miriam loosens restrictions and military funding for greater promotion of the self through community and activity, while a Police State Miriam uses a state of emergency, with herself as head of government, to act in preservation of her people and their souls. Miriam preserves Fundamentalism, which likely strips funding from labs in the interest of devoting the majority of the social fabric to religious concerns and the totality of existence as universal believers, protecting them from foreign influence without becoming the paranoid police state of informants that a Bloodraven might promote.
Economically, Miriam forbids nothing. A Planned economy is probably fashioned along the lines of Christian communalism, with common ownership and shared industry, justified in sermons and enforced through a small army of bureaucratic clerks. A Free Market Miriam resembles the American South, with an emphasis on charitable giving and a strong sense of community. A Green Miriam acknowledges that resources are limited and so encourages thrift and voluntary deprivation for the sake of the community, using less so that others might have more, as prescribed by Christian virtue.
Miriam forbids Knowledge as a value, and this makes sense given what is mentioned above: the pursuit of knowledge for its own sake makes one heedless of its risks. If Miriam values Power, she has probably come to accept the necessity of a holy war as the only way to save mankind from its own recklessness. A Wealth Miriam likely focuses more upon the prosperity gospel, where good people gain money through goodness, build industry to employ others, and so on.
A Cybernetic Miriam probably pursues increasing automation to free others from their tasks and let them devote themselves more completely to other matters, but this doesn’t sound appealing to a Miriam who fears the machine rising against the master. A Eudaimonic Miriam is almost certainly the one she would elect to pursue, finally creating paradise on Planet and letting people live in goodness: good in word, good in thought, good in action, good in faith. Thought Control is, again, a sinister one, the people finally rendered docile believers, where Shepherd Miriam finally has her flock, where sin is such an evil that it must be prevented at all costs. Though if I had seen Miriam do this, the last thing I would say before I was invariably hauled off to the Punishment Sphere is: “And what of the immortal soul?”
Thanks for the question, TBH.
SomethingLikeALawyer, Hand of the King
19 notes · View notes
transhumanitynet · 7 years ago
Text
Obstacles to Mind Uploading
Sing The Body Electric
“Mind Uploading” is the idea that the pattern of information which constitutes your perceptual awareness, memories, personality, and all other cognitive functions can be abstracted from the brain it developed in, and “run” on a different computational substrate. In other words; that the stuff which makes you, you could in principle escape the inherent limitations of human biology… such as inevitable short-term mortality. If it is plausible, that is a profoundly powerful and transformative idea.
Of course, the uploading idea has a myriad of opponents. The vast majority are ill-informed people whose opposition relies more on instinct and straw-clutching than good arguments well supported by evidence. To be fair, the same could be said of the uploading idea’s many dilettante fans who simply like the notion without having seriously researched its plausibility. The paragraphs below offer a whirlwind tour of objections to uploading, and the degree to which they should be taken seriously.
Where to Begin? You Are Already A Machine
Human argumentation is rarely half as rational as we like to imagine it is. For a start, our estimates and judgments of whether an argument is correct are heavily dependent on context. More specifically, we are overly influenced by what are known as “frames” or “anchors”; i.e. by the initial point of reference we use to start thinking about… anything. For example, a million dollars sounds like a lot to a homeless person, and like considerably less to Bill Gates.
This is highly relevant to arguments about uploading, because people tend to begin those arguments from different starting points, depending on whether they like the idea or not. Opponents of uploading tend to start out with an implicit assumption that humans and machines are very different things, and never the twain shall meet (for one reason or another). Uploading advocates, however, will frequently argue that the human organism is already a machine of sorts, thus acting as a kind of living testimony to the possibility of intelligent, conscious machines.
The core issue tends to be a fundamental misunderstanding (albeit one that is often deliberate) over the question of what it is to be a machine. Opponents invariably define machines in terms of those artificial devices which already exist or have existed, whereas advocates focus on the underlying principles of known organisms and artifacts. In case you hadn’t guessed; I am an uploading advocate, and I believe that we are – in the deepest sense – already machines, and always have been.
Computational Power, S-Curves, & Technological Singularities
Of course, that still leaves a considerable (some would say intractable, even impossible) gulf between our current technical ability on the one hand, and the ability to intelligently alter, replicate, and improve upon our own biological machinery on the other. For a cogent, exhaustive argument for the ability of accelerating technological development to deliver on these promises, I would suggest reading “The Singularity Is Near” by Ray Kurzweil.
The basic premise of that book is that technological innovations make further innovation easier to produce, which in turn leads to the (already well-observed) acceleration of change. Accelerating change follows an exponential (rather than linear) pattern, by which we might reasonably expect to see twenty thousand years of technological innovation at the c. 2000 CE rate by the end of the 21st century. That is definitely enough innovation to bridge the kind of technical gap we’re talking about. Of course, opponents like to deny that accelerating change even exists, but their claims are increasingly hard to take seriously if you pay attention to the latest developments coming out of cutting-edge labs.
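The arithmetic behind that claim can be sanity-checked with a toy model. The doubling-per-decade rate below is my own illustrative assumption, not Kurzweil's exact parameterization, but it shows how compounding turns one century into roughly ten thousand "year-2000-equivalent" years of progress, the same order of magnitude as his famous figure:

```python
# Toy model of accelerating change (assumption: the rate of progress
# doubles every decade, measured in year-2000-equivalent years).

def progress_years(decades: int, doubling_per_decade: bool = True) -> int:
    """Cumulative innovation over `decades`, in year-2000-equivalent years."""
    total = 0
    rate = 10  # one decade at the year-2000 rate = 10 equivalent years
    for _ in range(decades):
        total += rate
        if doubling_per_decade:
            rate *= 2
    return total

linear = progress_years(10, doubling_per_decade=False)
exponential = progress_years(10)
print(linear, exponential)  # → 100 10230
```

A century of linear progress yields 100 years-equivalent; the same century under decade-doubling yields 10,230, a hundredfold difference from compounding alone.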
Minds, Bodies, and… Intestines?
Broadly speaking, on the technical level (i.e. leaving aside arguments that we can upload minds, but shouldn’t), there are two types of opponent argument. One is that the mind cannot be reduced to information and thus modelled. The most common version of that argument comes from religion, involves “souls” (whatever they are), and is addressed further below. The second is that the mind can be modelled in terms of information, but we are modelling the wrong information.
I would not want to dismiss that second argument too quickly. To be frank, more often than not it is perfectly on the money. It’s just that I believe we are moving closer and closer to modelling (and understanding) the right information all the time. Let’s be clear here: the oft-heard refrain that “the mind and consciousness are complete mysteries, we have no idea how they work” is a ridiculous, infantile catchphrase used only by people who are wilfully ignorant of the last twenty years of developments in cognitive neuroscience and related scientific disciplines.
AI research is littered with ridiculously simplistic assumptions from people who’ve had little or nothing to do with cognitive science or any related discipline, working on their own narrow-domain problems and then somehow assuming that their models capture the intricacies of, well… everything. The first “AI Winter” and the challenge of developing competent AI chess players was perhaps the most notable early wake-up call in that department. To cut a long story short, the moral of that story is that AI researchers have a habit of making lots of huge, terrible assumptions.
These days, it’s much harder to find a serious researcher who thinks you can abstract away most neurological processing without “throwing the baby out with the bathwater”. These days, complexity is increasingly respected and explored, which means not only not dismissing it, but also not holding it up as some magical ‘deus ex machina’ from which consciousness will emerge if we can only hook enough artificial neurons up to each other…
Anyway, such issues lead to some interesting grey areas, which are often (in my opinion) misused for the purposes of argument. For example, certain biologists have made a lot out of observed connections between the human gut microbiome and “enteric nervous system” on the one hand and cognition as a whole on the other. The research literature essentially says that human intestinal health affects our mood and other personality aspects. On the one hand, that is an entirely reasonable observation, of course. It is hardly surprising that our moods and cognitive abilities are highly sensitive to the state of the body they are instantiated in!
It is quite another thing, however, to suggest (as opponents sometimes do) that this intestinal “second brain” (so-called by popular science writers) is intrinsic to intelligence or conscious awareness, or any harder to model than any other part of the extended nervous system. You could argue up this garden path for a long time, but the basic reality can be illuminated with a simple Reductio Ad Absurdum: Do you really believe that if you could fully capture everything happening in a person’s brain but not their (personal, specific) intestines, then something fundamentally definitive about that person would be missing? If you do, then I would hazard that you have some rather, ahem, fringe notions about what information is actually processed by the enteric nervous system.
Leaping the Gap from Data to Software
Another intriguing, and yet ultimately spurious objection to uploading is to say that you can collect all the neurological data you want, but without some kind of “animating force” in the form of properly configured software then it would be for nothing. On a certain level this argument can carry some weight, but again it’s easy to take that too far.
The value of this opposition argument is inevitably correlated with how much abstraction of human neural activity uploaders are committed to. Basically, we know that humans are intelligent and consciously aware. With a technology that modelled the human nervous system down to each individual atom, there is no need for software with any “magic sauce” beyond faithfully replicating the physics of atomic interaction. Of course, that would require a staggering amount of computational power to achieve, if it is even possible (the jury seems to be out on that, depending upon the computational assumptions you make), so the natural temptation is to take shortcuts: model entire molecules, neurons, neuron clusters, brain regions, and so on. The more abstraction you rely upon, the more you have to rely upon software to bridge the gap.
That is an entirely fair point. It is not, however, any kind of argument that uploading is impossible. To the contrary, it is an argument for the establishment of the circumstantial boundaries within which uploading is possible, given sufficient available computational power.
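The computational-power question above can be made concrete with a back-of-envelope comparison. The constants here are rough order-of-magnitude figures commonly cited in the neuroscience and physics literature, not claims from the original post:

```python
# Back-of-envelope compute estimates at different levels of abstraction.
# All constants are rough literature ballparks (orders of magnitude only).

NEURONS  = 1e11   # neurons in a human brain
SYNAPSES = 1e14   # synaptic connections
ATOMS    = 1e26   # atoms in ~1.4 kg of brain tissue

def ops_per_second(units: float, update_hz: float, ops_per_update: float) -> float:
    """Total operations per second to simulate `units` elements."""
    return units * update_hz * ops_per_update

# Synapse-level model: each synapse updated ~100x/s, ~10 ops per update.
synapse_level = ops_per_second(SYNAPSES, 1e2, 10)   # ~1e17 ops/s
# Atom-level model: femtosecond timesteps (1e15/s), ~10 ops per atom.
atom_level = ops_per_second(ATOMS, 1e15, 10)        # ~1e42 ops/s

print(f"synapse-level ~{synapse_level:.0e} ops/s, atom-level ~{atom_level:.0e} ops/s")
```

The two estimates differ by roughly twenty-five orders of magnitude, which is exactly why abstraction is so tempting, and why each abstraction step shifts the burden from raw physics onto correctly configured software.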
A Final Note on Souls and Other Fictions
If you believe that you could perfectly capture every conceivable physical aspect of a person down to the atomic level, putting aside all of the technological achievement required to do such an incredible thing, and still believe that something important is being missed out, then it seems fairly safe to say that you believe in souls.
Not in some metaphorical, poetic sense, but in proper old-fashioned, literal “soul stuff” which somehow acts like a physical substance but obeys none of the laws of physics, and which people only imagine exists because they read about it in a work of fiction (and/or refuse to believe that they could be made of the same stuff as literally everything else in the observable universe).
If that is your position, then I’m afraid I only have two words for you: Grow Up.
Further Reading
AI Transcends Human Cognitive Bias http://transhumanity.net/ai-transcends-human-cognitive-bias/
Obstacles to Mind Uploading was originally published on transhumanity.net
3 notes · View notes
componentplanet · 6 years ago
Text
Google Struck a Deal to Secretly Access Health Data on Millions of Americans
America’s data privacy laws aren’t bad so much as they’re nonexistent. There’s no general federal data privacy law at all, and only a few states have attempted to pass meaningful legislation on the topic. While laws like HIPAA (Health Insurance Portability and Accountability Act) do have something to say about who is allowed to access patient medical records without the patient’s consent, it’s clear now that even this law is woefully inadequate to the privacy challenges of the 21st century.
Google has a deal with Ascension, the second-largest health-care system in the United States, to gather and crunch data on millions of Americans across 21 states, according to the Wall Street Journal. The initiative is codenamed “Project Nightingale,” and is described as “the largest in a series of efforts by Silicon Valley giants to gain access to personal health data and establish a toehold in the massive health-care industry.” Amazon and Microsoft are also described as muscling into the medical industry, though apparently they have yet to strike deals quite this large.
The Data Isn’t Anonymized
The WSJ claims that the data “encompasses lab results, doctor diagnoses, and hospitalization records, among other categories, and amounts to a complete health history, including patient names and dates of birth.” Neither patients nor doctors have been notified of their inclusion in these data sets. All of this is legal under HIPAA, which allows hospitals to share data with business partners so long as the information is being used “to help the covered entity carry out its health care functions.” Some employees of Ascension attempted to raise concerns about how this data was being used, but their complaints were dismissed, according to the report.
In the hypothetical universe in which Google intended to carry out this research in good faith, it would announce its efforts, accept only data from patients who opted in, conduct the difficult work of contacting all of those patients or their next of kin, pay their families for the value of the data it intended to mine from their lives, thoroughly anonymize the data, and take other various steps to establish trust when handling something as sensitive as a person’s medical data. In this fantasy, Google would also recognize that a great deal of data misuse and abuse happens because data is passed off to an endless succession of third parties and that it had a moral obligation to ensure that the valuable information it gathered would not be misused. Comfortable with its own ability to take on this responsibility, Google would publicly discuss how it protected our data.
But all of that is difficult. It’s much easier to secretly negotiate access to the information and build databases on people’s medical histories without consent. It’s easier for Ascension to ignore its own employees when they raise ethical concerns about these arrangements. It’s easier to take advantage of a loophole in federal law than to admit that this loophole is bad and needs to be closed.
Google undoubtedly has a lot of arguments about how it’s doing this for the best of reasons. That’s unsurprising. It was Google’s Larry Page who first said that medical data should be public knowledge in the first place. Larry Page, billionaire, and CEO of Alphabet, apparently cannot conceive of the idea that someone might be discriminated against if their private medical information became public knowledge. In his comments on this topic, Page has argued that there is no reason for anyone to hide this information and that he believes people do so because they are afraid of not qualifying for insurance. The idea that people might struggle to find employment or face other sorts of discrimination as a result of chronic illness or injury did not seem to have occurred to him in 2013. If it’s occurred to him since, he’s kept quiet about it.
The Lack of Disclosure Is a Problem. So Are Some of the Goals.
The WSJ takes pains to note that Google wants to build AI engines to better diagnose patients, while Ascension is looking for ways to improve outcomes and save lives. This is probably true. A lot of people are aware of how deeply broken the US healthcare model is, and how great the need for solutions is. The problems are complex because the system is incredibly complex. An AI system for effectively and quickly diagnosing patients that can deal with far-flung locations and treat or analyze patient data remotely and cheaply is an intrinsically attractive idea. People who want to be helpful go into these fields hoping to do something about their problems.
But if there’s one thing we’ve hopefully collectively learned from privacy disaster after privacy disaster, it’s that we can’t just emphasize the positive. The WSJ writes that Google is working with Ascension at no cost because it wants to build a healthcare database it can sell to other providers. Ascension, for its part, openly acknowledges that one of the goals of its program is to increase revenue from patients.
Ascension, a Catholic chain of 2,600 hospitals, doctors’ offices and other facilities, aims in part to improve patient care. It also hopes to mine data to identify additional tests that could be necessary or other ways in which the system could generate more revenue from patients, documents show. Ascension is also eager for a faster system than its existing decentralized electronic record-keeping network. (Emphasis added).
Given that the cost of interacting with the US medical system has been rising for decades, it’s appropriate to ask why it’s appropriate to adopt a new medical system on the basis of extracting higher revenue from patients. Ascension is supposedly a non-profit, religiously affiliated healthcare organization. US healthcare cost growth is out of control, and employees shoulder an ever-larger share of that burden.
It could be very reasonably argued that focusing on increased revenue per patient over the past 46 years has produced charts like the above. On top, us. Below us, everyone else.
Expensive AI-driven systems are not going to be adopted because they identify revenue opportunities with marginal value — say, targeting rich people who might like to have a little more elective plastic surgery. The push for such systems is going to happen in part because they’re good at finding new revenue sources. And over-testing is already a huge problem in the American healthcare system.
Waste, in total, is estimated to account for roughly 25 percent of all American healthcare spending. Of the estimated $760B–$935B in wasted American healthcare costs as of 2019, between $77B and $102B is estimated to be caused by either over-treatment or poor-quality treatment, one of the largest single categories. This is not to say that people who need tests shouldn’t get them (of course they should), but when evaluating patients to see if they are receiving proper tests, the focus should also be on making certain tests are not performed unnecessarily. If Ascension considered the moral obligation it had not to charge people for tests they didn’t need, the WSJ does not mention it.
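As a quick consistency check on those figures (the dollar ranges and the 25 percent share are from the estimates cited above; the implied-total arithmetic is mine):

```python
# Sanity-checking the cited US healthcare waste figures.

waste_low, waste_high = 760e9, 935e9           # total wasted spending, USD
overtreat_low, overtreat_high = 77e9, 102e9    # over-/poor-quality treatment, USD

# If waste is ~25% of all spending, total US healthcare spending is implied:
implied_total_low = waste_low / 0.25           # ~$3.0T
implied_total_high = waste_high / 0.25         # ~$3.7T

# Over-treatment's share of total waste (widest possible bounds):
share_low = overtreat_low / waste_high
share_high = overtreat_high / waste_low

print(f"implied total spending: ${implied_total_low/1e12:.2f}T-${implied_total_high/1e12:.2f}T")
print(f"over-treatment share of waste: {share_low:.0%}-{share_high:.0%}")
```

The implied total of roughly $3.0T–$3.7T matches actual US health spending for 2019, so the 25 percent figure is internally consistent, and over-treatment alone accounts for somewhere around 8–13 percent of all waste.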
Now that they’ve been discovered, Google and Ascension are claiming to have had users’ best interests at heart… just not enough to tell them in advance. Google has been found to be violating the privacy of its own users in so many various ways over the years, from Google Plus to Android, it strains credulity to see these issues as individual innocent mistakes. Now we discover the company is doing it again, this time with personal medical data it ought to have no legal right to receive in the first place.
I bet they even promised it would only be shared with trusted partners. You know — like the 150+ Google employees that already have access to the personal healthcare records of tens of millions of Americans, according to the WSJ.
Now Read:
DeepMind’s StarCraft II AI Can Now Defeat 99.8 Percent of Human Players
Home Builders are Reportedly Dumping Nest After Google’s Changes
How Google Legally Profits From Massive Fraud on Its Platform (and What You Can Do About It)
from ExtremeTech https://www.extremetech.com/internet/301802-google-struck-a-deal-to-secretly-access-health-data-on-millions-of-americans from Blogger http://componentplanet.blogspot.com/2019/11/google-struck-deal-to-secretly-access.html
0 notes
fazeupmag-blog · 6 years ago
Text
New Post has been published on Fazeup
New Post has been published on https://www.fazeup.tk/2019/05/the-huawei-ban-isnt-simply-unhealthy-for-the-corporate-its-unhealthy-for-android-normally/
The Huawei ban isn’t just bad for the company, it’s bad for Android in general
Opinion post by
C. Scott Brown
Over the weekend, Google announced it would cease all business with Chinese smartphone manufacturer Huawei. After Google’s announcement of this Huawei ban, other companies followed suit in one way or another, including Qualcomm, Intel, Microsoft, and even Arm.
The Huawei ban is the result of several factors all coming to a head when United States President Donald Trump issued an executive order effectively forbidding Huawei from buying or selling products from U.S. companies. To catch up on how this all came to be, consult our roundup here.
How will this affect Android as a whole? Many people may look at this Huawei ban and say, “I don’t own a Huawei phone and don’t ever plan to buy one, so this has no effect on me.”
Nothing could be further from the truth. This Huawei ban affects us all, and, if it sticks, the negative side effects could do some serious damage to the world of Android.
This Huawei ban is going to be costly
When Google announced this Huawei ban, it was easy to assume that Huawei, not Google, was in deep trouble. After all, Google doesn’t make much money from China (at least not directly) since Google products are mostly non-existent in the country.
However, Google has plenty of money invested in China, including an AI research facility it announced at the end of 2017. Project Dragonfly, Google’s supposedly shelved ambition to bring Search back to China, is at the very least proof that the company has its sights set on the country.
After this Huawei ban, though, you can bet Google isn’t going to be too welcome in China, which will undoubtedly throw its financial plans into disarray.
Google’s China ambitions are now even more troubled than before, and U.S. companies stand to lose a great deal of money.
Intel, Qualcomm, Broadcom, Microsoft, and other U.S. organizations all have funds tied into Huawei in some way. This Huawei ban means that cash flow is getting choked off, which will hurt the bottom line for everyone. Earlier today, we also learned that Arm is pulling all business away from Huawei, which could be even more detrimental to the company’s sustainability than losing Google.
While these companies losing revenue doesn’t directly affect the world of Android, it certainly does indirectly. When the very pillars of the Android-based smartphone industry are feeling financial stress, that affects everything, usually starting with R&D. With less cash to burn, we’ll see less innovation, fewer releases, and higher prices for Android products.
Could a new Android challenger appear?
We already know that Huawei has a “Plan B” in the works when it comes to the Android operating system. Rumors point to a Huawei-branded OS that runs Android apps natively being prepped to launch as early as this year.
Will this new operating system be any good? Probably not, at least at first. Android has a ten-year head start on Huawei, not to mention the benefits of Android’s open source nature (something Huawei and China would almost certainly never support). If you need proof of how likely it is that the Huawei mobile OS will be pretty terrible, just look at the fact that we’ve never seen it before: Huawei has relied on Android for years because, frankly, it’s probably the best system for the job.
The gloves are off now. If this Huawei ban sticks, the company will have no choice but to go all-in on its own OS. It’s not like Huawei is simply going to stop making smartphones.
If Huawei can’t use Android, it’ll use something else. Its smartphone business is too big to abandon.
It won’t happen overnight, but the Huawei OS would eventually become a threat to the dominance of Android. Even if it’s a nominal threat at first, that threat will grow because of the industrial might of Huawei combined with the backing of China itself. That’s something you can’t forget in all of this: Huawei and the nation of China are so intrinsically tied together that a fight with one is invariably also a fight with the other.
Editor’s Pick
New details emerge on Huawei’s potential plans to go forward without Google
The smartphone industry was rocked to the core over the weekend when Google revealed it will cease all business with telecom company Huawei. This would include the removal of Android support on Huawei smartphones, leaving …
Can you imagine what it would be like if there were a state-sponsored mobile operating system pushed out to China’s roughly 1.4 billion residents? That’s an enormous portion of the world’s population who would no longer use Android. Even if the OS is terrible, it would only be a matter of time before it becomes a stable challenger to Android.
Some may say that competition in the mobile OS world is welcome, as it will just push Android to be that much better. Don’t forget that there have been plenty of times over the past 20 years when many mobile operating systems duked it out: various Windows systems, BlackBerry OS, Palm OS, Symbian, and more. All of them are gone now because Android saw wide adoption at an alarmingly fast pace, just as we’d likely see with this Huawei OS.
Now, I’m by no means saying that this Huawei OS is going to kill off Android. I’m merely pointing out that this Huawei OS is likely not the kind of competition that will make Android any better.
Huawei is too big to fail
Think back to what I said earlier about how Huawei and China are so linked that they’re almost interchangeable. That is, of course, one of the big reasons this ban is in effect in the first place.
Huawei is already the second-largest smartphone manufacturer on the planet. It’s also already the world’s largest telecommunications company. When you combine that kind of pedigree with the forever-loyal backing of the Chinese government, you have the very definition of “too big to fail.”
China and the U.S. have had frosty relations for a long time. This is only going to make them worse.
It’s a safe bet that this Huawei ban is going to be the start of something akin to a cold war. China has long seen the U.S. as an adversary; maybe not an enemy at war, but certainly a threat. This just pushes that dynamic even further.
China will fight tooth and nail to defend Huawei, that much is certain. Will the U.S. government do the same when it comes to companies like Google or Qualcomm? Certainly not, at least not at the same level. This Huawei ban is about the U.S. government vs. China, and that puts U.S. companies in a very intimidating position in the middle. If you don’t think that will affect the world of Android, you’re in for a nasty shock.
Will it be worth it?
It’s very easy to have conflicting opinions on Huawei. I think the company makes some truly exceptional smartphones, but I also know that it has a history of some incredibly shady business practices. I try not to give my money to companies with such poor ethical standing, but some of their products are pretty tempting, I’ll admit.
With that in mind, I’m partially OK with this Huawei ban because it feels a bit like the United States standing up for the rest of the world and saying, “No, you can’t get away with this anymore, Huawei.” In that respect, I’m a supporter of the ban.
It is hard to take sides in this fight, but it’s not hard to hope there won’t be too much collateral damage.
On the other hand, the United States, and the companies within it, including Google, don’t have the cleanest of histories either. It’s not difficult to see some hypocrisy in banning Huawei for actions the U.S. ignores when committed by one of its own.
Editor’s Pick
Should you buy a Huawei device right now? (Updated)
Update #3: May 20, 2019 at 6:00 p.m. ET: The U.S. Commerce Department has created a temporary 90-day license that restores Huawei’s ability to provide software updates to existing Huawei handsets. Read more here. Original article: …
If the ethical standing of this Huawei ban is ambiguous at best, the question then becomes whether or not it will be worth it. We’ll have to wait and see how Huawei handles this to know the answer to that question.
Will Huawei make some concessions and turn things around, as we saw recently with ZTE? Will Huawei give the U.S. the metaphorical finger and go its own way, igniting trade wars and tech wars in its wake? Will the U.S. realize the gravity of the situation and make its own concessions in order to keep the peace? We don’t know yet, but here’s hoping Android doesn’t get permanently damaged in the meantime.
NEXT: Huawei and the Trump debacle: The story so far
Source
0 notes
gta-5-cheats · 7 years ago
Text
Students confront the unethical side of tech in ‘Designing for Evil’ course
New Post has been published on http://secondcovers.com/students-confront-the-unethical-side-of-tech-in-designing-for-evil-course/
Whether it’s surveilling or deceiving users, mishandling or selling their data, or engendering unhealthy habits or thoughts, tech these days is not short on unethical behavior. But it isn’t enough to just say “that’s creepy.” Fortunately, a course at the University of Washington is equipping its students with the philosophical insights to better identify — and fix — tech’s pernicious lack of ethics.
“Designing for Evil” just concluded its first quarter at UW’s Information School, where prospective creators of apps and services like those we all rely on daily learn the tools of the trade. But thanks to Alexis Hiniker, who teaches the class, they are also learning the critical skill of inquiring into the moral and ethical implications of those apps and services.
What, for example, is a good way of going about making a dating app that is inclusive and promotes healthy relationships? How can an AI imitating a human avoid unnecessary deception? How can something as invasive as China’s proposed citizen scoring system be made as user-friendly as it is possible to be?
I talked to all the student teams at a poster session held on UW’s campus, and also chatted with Hiniker, who designed the course and seemed pleased at how it turned out.
The premise is that the students are given a crash course in ethical philosophy that acquaints them with influential ideas, such as utilitarianism and deontology.
“It’s designed to be as accessible to lay people as possible,” Hiniker told me. “These aren’t philosophy students — this is a design class. But I wanted to see what I could get away with.”
The primary text is Harvard philosophy professor Michael Sandel’s popular book Justice, which Hiniker felt combined the various philosophies into a readable, integrated format. After ingesting this, the students grouped up and picked an app or technology that they would evaluate using the principles described, and then prescribe ethical remedies.
As it turned out, finding ethical problems in tech was the easy part — and fixes for them ranged from the trivial to the impossible. Their insights were interesting, but I got the feeling from many of them that there was a sort of disappointment at the fact that so much of what tech offers, or how it offers it, is inescapably and fundamentally unethical.
I found the students fell into one of three categories.
Not fundamentally unethical (but could use an ethical tune-up)
WebMD is of course a very useful site, but it was plain to the students that it lacked inclusivity: its symptom checker is stacked against non-English-speakers and those who might not know the names of symptoms. The team suggested a more visual symptom reporter, with a basic body map and non-written symptom and pain indicators.
Hello Barbie, the doll that chats back to kids, is certainly a minefield of potential legal and ethical violations, but there’s no reason it can’t be done right. With parental consent and careful engineering it will be in line with privacy laws, but the team said that it still failed some tests of keeping the dialogue with kids healthy and parents informed. The scripts for interaction, they said, should be public — which is obvious in retrospect — and audio should be analyzed on device rather than in the cloud. Lastly, a set of warning words or phrases indicating unhealthy behaviors could warn parents of things like self-harm while keeping the rest of the conversation secret.
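The on-device check the team describes could look something like this minimal sketch (the phrase list and function names are hypothetical, purely for illustration; this is not Mattel's actual code):

```python
# Hypothetical sketch of the team's proposal: scan each utterance on
# the device itself, and surface only matched warning flags to parents,
# never the full transcript.

WARNING_PHRASES = [
    "hurt myself",
    "hate myself",
    "nobody likes me",
]

def scan_utterance(text: str, phrases=WARNING_PHRASES) -> list:
    """Return any warning phrases found in a child's utterance."""
    lowered = text.lower()
    return [p for p in phrases if p in lowered]

def flags_for_parent(text: str) -> list:
    # Only the matched flags leave the device; the rest of the
    # conversation stays private, as the team suggests.
    return scan_utterance(text)
```

The key design point is that matching happens locally, so the cloud (and the parent) never sees the raw dialogue.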
WeChat Discover allows users to find others around them and see recent photos they’ve taken — it’s opt-in, which is good, but it can be filtered by gender, promoting a hookup culture that the team said is frowned on in China. It also obscures many user controls behind multiple layers of menus, which may cause people to share location when they don’t intend to. Some basic UI fixes were proposed by the students, and a few ideas on how to combat the possibility of unwanted advances from strangers.
Netflix isn’t evil, but its tendency to promote binge-watching has robbed its users of many an hour. This team felt that some basic user-set limits like two episodes per day, or delaying the next episode by a certain amount of time, could interrupt the habit and encourage people to take back control of their time.
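A user-set limit like the one this team suggests is simple to sketch (the class and method names here are invented for illustration, not any real Netflix API):

```python
from datetime import date

# Sketch of a self-imposed binge limit: autoplay stops once the viewer
# hits their chosen daily episode cap.

class WatchLimiter:
    def __init__(self, max_per_day: int = 2):
        self.max_per_day = max_per_day
        self._counts = {}  # maps a date to episodes watched that day

    def record_episode(self, day: date) -> None:
        self._counts[day] = self._counts.get(day, 0) + 1

    def may_autoplay(self, day: date) -> bool:
        """True while the viewer is still under their daily cap."""
        return self._counts.get(day, 0) < self.max_per_day
```

The same counter could instead drive a delay before the next episode rather than a hard stop, per the team's alternative suggestion.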
Fundamentally unethical (fixes are still worth making)
FakeApp is a way to face-swap in video, producing convincing fakes in which a politician or friend appears to be saying something they didn’t. It’s fundamentally deceptive, of course, in a broad sense, but really only if the clips are passed on as genuine. Watermarks visible and invisible, as well as controlled cropping of source videos, were this team’s suggestion, though ultimately the technology won’t yield to these voluntary mitigations. So really, an informed populace is the only answer. Good luck with that!
China’s “social credit” system is not actually, the students argued, absolutely unethical — that judgment involves a certain amount of cultural bias. But I’m comfortable putting it here because of the massive ethical questions it has sidestepped and dismissed on the road to deployment. Their highly practical suggestions, however, were focused on making the system more accountable and transparent. Contest reports of behavior, see what types of things have contributed to your own score, see how it has changed over time, and so on.
Tinder’s unethical nature, according to the team, was based on the fact that it was ostensibly about forming human connections but is very plainly designed to be a meat market. Forcing people to think of themselves as physical objects first and foremost in pursuit of romance is not healthy, they argued, and causes people to devalue themselves. As a countermeasure, they suggested having responses to questions or prompts be the first thing you see about a person. You’d have to swipe based on that before seeing any pictures. I suggested having some deal-breaker questions you’d have to agree on, as well. It’s not a bad idea, though open to gaming (like the rest of online dating).
Fundamentally unethical (fixes are essentially impossible)
The League, on the other hand, was a dating app that proved intractable to ethical guidelines. Not only was it a meat market, but it was a meat market where people paid to be among the self-selected “elite” and could filter by ethnicity and other troubling categories. Their suggestions of removing the fee and these filters, among other things, essentially destroyed the product. Unfortunately, The League is an unethical product for unethical people. No amount of tweaking will change that.
Duplex was taken on by a smart team that nevertheless clearly only started their project after Google I/O. Unfortunately, they found that the fundamental deception intrinsic in an AI posing as a human is ethically impermissible. It could, of course, identify itself — but that would spoil the entire value proposition. But they also asked a question I didn’t think to ask myself in my own coverage: why isn’t this AI exhausting all other options before calling a human? It could visit the site, send a text, use other apps and so on. AIs in general should default to interacting with websites and apps first, then to other AIs, then and only then to people — at which time it should say it’s an AI.
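That escalation ladder can be sketched as a simple fallback chain. The channel names and handlers below are invented stand-ins, not Google's API; the point is the ordering and the mandatory disclosure before a human is reached:

```python
# Sketch of the proposed escalation order for a calling agent: website
# first, then a text message, then another AI, and a human only as a
# last resort, with disclosure required before speaking to a person.

CHANNEL_ORDER = ("website", "text", "other_ai", "human")

def book_appointment(request: str, channels: dict):
    for name in CHANNEL_ORDER:
        handler = channels.get(name)
        if handler is None:
            continue  # this business doesn't offer the channel
        message = request
        if name == "human":
            message = "Hi, I'm an automated assistant. " + message
        result = handler(message)
        if result is not None:
            return result  # channel succeeded; stop escalating
    return None
```

Each handler returns `None` on failure, so the agent only phones a person after the cheaper, non-deceptive channels have been exhausted.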
To me the most valuable part of all these inquiries was learning what hopefully becomes a habit: to look at the fundamental ethical soundness of a business or technology and be able to articulate it.
That may be the difference in a meeting between being able to say something vague and easily blown off, like “I don’t think that’s a good idea,” and describing a specific harm and reason why that harm is important — and perhaps how it can be avoided.
As for Hiniker, she has some ideas for improving the course should it be approved for a repeat next year. A broader set of texts, for one: “More diverse writers, more diverse voices,” she said. And ideally it could even be expanded to a multi-quarter course so that the students get more than a light dusting of ethics.
With any luck the kids in this course (and any in the future) will be able to help make those choices, leading to fewer Leagues and Duplexes and more COPPA-compliant smart toys and dating apps that don’t sabotage self-esteem.
0 notes
ynsespoir · 7 years ago
Text
5 Trends Shaping The Health Insurance Market
Insurance vs the rest of us: Does it have to be this way?
Most people realize that the best way to remain healthy is to adopt lifestyle habits built around a regular diet and exercise. But even if you do everything right, injuries happen, whether through someone else's fault or your own missteps. Hereditary, late-onset conditions can surface regardless.
Heart disease of some variety has roughly a 1-in-7 chance of killing you, while the odds of being struck by lightning are more than 1 in 160,000, according to the same statistics. Between these extremes lie many causes of unexpected death, some of which health insurance can help you guard against. Non-fatal wounds, injuries, and sickness are out there, too.
  Did you know that around 50,000 people in the US die annually from influenza? Weakness, malnutrition, and a lack of healthcare solutions are part of the reason why.
You want to have some form of insurance for that which cannot be predicted. At least, that’s been the trend over the last several decades in developed countries. Now, getting the right kind of coverage is something easier said than done, and it will cost you money on a monthly basis—in most cases! There are always programs available; it’s just qualifying for them that can be difficult.
If you don’t qualify for a government assistance solution, and you don’t have any healthcare benefits at your place of employment, you can expect to spend between $300 and $3,000 every month depending on the size and scope of your necessary insurance parameters.
It’s not uncommon for a family of three to pay more than $1,000 a month for insurance. This is especially likely if a job change forces a family off an employer plan that cost the company $25,000 a year and into the individual market. The family may have needs that call for continued top-tier coverage.
If a family in this situation had an employer footing the bill for that insurance, finding comparable coverage again would mean shopping among the top-tier options out there. Even setting that aside, according to the statistic mentioned earlier, the average cost of family insurance nationwide is over $800 a month. That’s $9,600+ annually.
But it is possible to pay less—it all depends on what your health insurance needs to end up being in the long run. One of the best ways to ensure you get the right healthcare at the right price is to shop around a little. Something else that can help you in this decision is understanding burgeoning trends as they pertain to today’s healthcare industry.
  1: Data Leveraging Regarding Healthcare
Big Data: It’s here, and it’s likely here to stay. This is because it’s going to keep becoming an intrinsic part of human society until some technological event horizon initiates the AI singularity science fiction writers have been getting manic about for decades. We’re not quite at the point where machines have taken over everything, but you can bet things are moving in that direction, and 2018 will see an increase in such activity.
Imagine a machine that can perform a complex operation with zero errors. That machine would have to have critical thinking capacity or that which can mimic it. Now such complex operability is going to require a lot of data from a lot of professional medical practitioners, and a great deal of engineering acumen. There is a reason the medical industry is flush with resources. Such developments are costly!
Accordingly, today data is continuously being collected. Data is protected and collected on patients in accordance with HIPAA regulations for the safety of those individuals. Data is collected on medical care institutions and the levels of service they can provide. Data is collected about successful recoveries, and statistics related to those patients who can’t recover, or are unlikely to. All of these are reflections of Big Data as applied to medicine; for an in-depth look at how these things interrelate, check out this article from www.ncbi.nlm.nih.gov.
  Insurance companies have always been data mavens when it comes to this sort of thing; Big Data not excluded. They have to be because the brokering of insurance as a service requires an accurate risk assessment, and statistics are the key.
Statistically, the cost of monthly coverage must be leveraged against the likelihood of its necessary implementation. Figuring out that information requires data that helps form a larger picture. So insurance companies are always seeking data where they can.
As an individual seeking insurance, you need to understand the value of data in modernity. It’s “binary” gold if you will. So what you provide, and how you provide it, could be important factors in the sort of healthcare coverage you’re likely to receive. Remember: the trend is toward increased computation in medicine, not less.
Whenever data takes over, a two-fold situation occurs. On the one hand, propensity for better service increases. On the other hand, there become greater instances of exploitable vulnerabilities in the system for those savvy enough to find them. It’s a double-edged sword, and it’s getting sharper every day going into 2018.
  2: Patient Personalization
Because of this profusion of data—of information technology; that is to say: technology that runs on, and deals in, information—you can expect patient personalization to expand drastically; especially among millennials. This is a win-win situation for everybody. Patients who don’t prefer certain doctors can avoid them, and doctors who wouldn’t prefer to deal with certain patients can help them find more suitable practitioners.
There is always a moral angle in medicine, whether it be one that is honorable or one that seeks to “game” the system. Now, most doctors aren’t going to be interested in collusion. Some patients want to work with a doctor who has a more controversial perspective on the system, and how it works.
As Big Data begins to revitalize medical technology, and how that technology is put to use serving patients, data about preferences and personalization will increasingly dominate the market. The way service is applied, who gets it and in what way, and many other things will come to define it. It makes sense to choose insurance providers who understand these trends and are working to give the people they insure the kinds of solutions that best fit them.
Telemedicine, that is to say, appointments conducted digitally, is quickly becoming a convenient option in the market. There are even remote-controllable bots. These bots can be used as surrogates for doctors, and can even provide examination data with about the level of accuracy found in an office.
This leads to the next point, interestingly enough. Training in areas of augmented reality becomes necessary for practitioners to properly serve a clientele that is diversifying as both technological solutions and data expansion utterly revitalize what the term healthcare even means. There are augmentation trends today which have nothing to do with preexisting health conditions and injury, but with basic personal preference. Consider the hand-gesture technology currently being bandied about.
Also, those who are seeking government-funded gender reassignment surgery must be considered. Perhaps there is an argument for mental health here, perhaps not. Physicians must respect two things: one, the demands of the market may not necessarily reflect the best health practices available, and two: if insurance companies are willing to pay for procedures, somebody is going to perform those procedures and make millions of dollars at it regardless of ethics.
Such new social considerations predicate new trends that themselves are putting down roots. So this must be considered regarding the forward transitions of modern healthcare in developed countries as well. Never before has political affiliation made such a difference in the kinds of available coverage out there; just consider the impact that the ACA has had on the country, and the clear political motivation silhouetting it. Opportunities and dangers abound. Whatever side of the political fence you’re on, you must respect this reality.
  3: Augmented Reality Training
This was mentioned in the last point, and it’s worth considering. Telemedicine and other similarly technology-saturated healthcare situations increasingly point to the need for physicians to use augmented reality. Augmented Reality, or AR for short, has a lot of potential, and videos and articles exploring how specifically it is being used in connection with medicine today are easy to find. Full maturation of AR means complex operations can be performed by physicians at a distance.
If you haven’t seen AR before, it’s like VR, but it’s not quite the same. Consider Google Glass. Google Glass made it so that a little screen you wore kept you continuously online at all times, and able to exercise your online abilities with greater convenience.
Well, Augmented Reality uses the same technology to “float” data in front of real-world situations, allowing physicians to make quicker, more appropriate decisions about specific patients. Big Data is critical here, and available, secure information properly leveraged through the right tech providers will define certain healthcare availabilities going forward. It’s something to keep in mind.
Something else that’s worth keeping in mind is the industry’s continuously expansive boom despite increases in healthcare costs. You can see this by conducting a simple search in the market. Searching physician jobs by HospitalRecruiting.com will produce over 14,000 results for some search options; according to the site, a general search for physician jobs will produce: “…14,468 jobs matching…criteria.”
That’s a lot of specialized positions. Virtually none of those jobs will pay less than $100k a year. That means, at minimum, the current job market for unattached physicians in the United States is worth somewhere around $1.4 billion in annual salaries.
There are physician jobs that pay more than $100k a year—many of them. If you were to average the open market at $10 billion, you would probably still be understating the case. The whole industry is worth $3 trillion; $10 billion is a drop in the bucket.
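The arithmetic behind that floor is just the quoted job count times the $100k minimum:

```python
# Back-of-the-envelope floor for the open physician job market,
# using the job count quoted above and a $100k minimum salary.
open_positions = 14_468
min_salary = 100_000
market_floor = open_positions * min_salary
print(f"${market_floor:,}")  # prints $1,446,800,000
```

That is the floor; averaging in the many positions paying well above $100k is what pushes the open-market estimate higher.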
That’s a growth market, despite all the difficulties besetting the industry today. Part of that growth is in technology and related technology jobs. AR work needs trainers and medical professionals acting as “guinea pigs”. The same is true of VR (Virtual Reality) tech used for intensive procedures. Also, part of that growth is political. Again, it doesn’t matter where you stand on the issues when you take a step back and consider the burgeoning healthcare options available to you, or their tangential effect on the entire market going forward.
    4: Wearable Devices And Smart Technology In Healthcare
When it comes to wearables, data collection is a key indicator of both their success and likelihood to become established echelons of modern healthcare—remember what Big Data teaches us regarding information collection.
The wearables that are most useful in data collection are those most likely to remain in the market—search for trends as you decide whether or not these are for you; there’s no clear winner out there yet, though things like the Fitbit are gaining popularity. There may even come to be, if there aren’t already, healthcare plans which only mete out coverage provided those insured have some wearable device on them at all times—as yet this isn’t the case.
As Big Data and AR/VR medicinal techniques expand, wearables which provide real-time information on patients who do and don’t have conditions can act as digital canaries in the coal mine, if you will. Billions of terabytes of data on human health statistics can indicate whether an increase in pulse at a certain time for a certain individual may be risky or innocuous.
Granted, there are likely to be some highly publicized mistakes in this regard going forward, but in twenty years’ time it’s not outside the realm of possibility that carriers like Verizon and T-Mobile will provide wearable health technology that helps prod and guide people’s health, even alerting physicians when compromising health situations are imminent and making it possible for them to AR/VR in and “save the day”, as it were, before things get too drastic.
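The "risky for this individual, innocuous for that one" idea boils down to comparing a reading against the wearer's own baseline. A minimal sketch (the z-score cutoff here is illustrative, not a clinical threshold):

```python
import statistics

# Flag a pulse reading only when it sits far outside the wearer's own
# recorded history (a simple z-score test; cutoff chosen arbitrarily).

def is_anomalous(history: list, reading: float, cutoff: float = 3.0) -> bool:
    mean = statistics.mean(history)
    spread = statistics.stdev(history)
    if spread == 0:
        return reading != mean  # flat history: any change stands out
    return abs(reading - mean) / spread > cutoff
```

A real system would need far more context (time of day, activity, medication), which is exactly where the aggregate data discussed above comes in.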
Smartphones already act as wearable health technology. They can count your footsteps and take measurements about your age, weight, and height which demonstrate whether statistically your BMI is at healthy ratios, or you need more exercise. Then, dependent on that data, you may just receive updates that encourage you to be more active!
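The BMI check mentioned here is just weight over height squared. A sketch of what such an app might compute (the 18.5–24.9 "healthy" band is the common guideline; the function names are invented):

```python
# BMI as a phone app might compute it from stored height and weight:
# weight in kilograms divided by height in meters, squared.

def bmi(weight_kg: float, height_m: float) -> float:
    return weight_kg / (height_m ** 2)

def should_nudge(weight_kg: float, height_m: float) -> bool:
    """True if the app might send a 'be more active' style update."""
    return not (18.5 <= bmi(weight_kg, height_m) <= 24.9)
```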
The truth is, the vast majority of conditions practitioners deal with are entirely avoidable—they’re called “chronic conditions”, and usually have at their root some behavior of the patient. Sedentary nature coupled with poor nutrition combine such that human bodies reach a state of degradation decades before they should. With the right technology and the right application of that technology, it’s possible to reduce these things.
Imagine being able to get a 50% reduction in healthcare expenses provided you wear some unobtrusive piece of information gathering technology which connects you to the IoT (Internet of Things)—which is a lot larger than most people understand—at all times. Such things are already on the horizon. 2018 may see a few similar innovations reveal themselves; so keep your eyes open!
  5: Changes In Medicare
Now Medicare is also going to experience some considerable changes going forward. ACA, or the “Affordable” Care Act, which has been rightfully nicknamed Obamacare, has proved itself a substantial disaster. Getting rid of it hasn’t been easy, though. It’s been de-enforced, and won’t be funded through enforced taxation going forward. This is going to have a collateral effect which serves to augment Medicare.
If you were unfamiliar with Medicare beforehand, you should keep in mind that even at its most streamlined, this government-breathed medical assistance solution has never been the most simple of provisions. You can check the government website for more in-depth information. Though to be fair, governments have a habit of obfuscating facts; especially as pertains to programs they provide.
Basically, there are four parts to this program—Medicare part A, B, C, D; and how these affect you will differ—consider what HealthMarkets.com has to say: “Each part [of Medicare] provides its unique coverage, and depending on which part you choose, the enrollment options will also vary.”
Obviously, you want the enrollment options which best match your particular situation. It makes sense to consult with medical practitioners closely associated with you to ensure you make the right choices here. Also, the trends of the market will trickle down to government solutions eventually. It’s a domino-effect thing: eventually, Medicare will be affected, come what may.
What does this mean to you? Well, watch Medicare. The wearable data-gathering technology alluded to earlier would be something a government organization would be interested in. There is a power in information collection, dissemination, and application.
The government prefers to do two of those three things, and you can likely guess which two. Data collection, and data application. So don’t be surprised when such operational exigencies begin to define Medicare through the application of new technology solutions.
Conclusion: Finding The Most Effective Healthcare For Your Situation
When you boil it all down, it doesn’t matter what kind of technology is involved in your healthcare solution if you’re not being served as you should. Also, you should likely consider what particular needs define your own situation as different from others. If you’re young and healthy, with no hereditary conditions on the horizon, you might go with a limited health care plan.
The older you get, the more conditions compound, and the more appealing solutions like wearable, data-gathering technology become. The reason is obvious: you have greater connectivity to necessary health coverage during incidents that may escalate very quickly into life-threatening territory. Many senior citizens lose the strength needed to right themselves after a fall, so even a slip can be fatal. Accordingly, this is also something you’re going to want to take into account going forward. We already know wearables are here; barring some catastrophe, they will be here in the future. Are they worth your time? Generally, weigh out all your options, and find ways of keeping yourself informed about the healthcare market.
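As an illustration of the kind of logic such a wearable might run, here is a minimal, purely hypothetical sketch of fall detection: a sharp spike in total acceleration followed by stillness. The thresholds are made up for illustration and do not come from any real device.

```python
# Hypothetical fall-detection sketch: flag a large acceleration spike
# (the impact) followed by readings near 1 g (the wearer lying still).
# Thresholds are illustrative, not taken from any real product.
def looks_like_fall(accel_g, impact_threshold=2.5, still_threshold=1.1):
    """Return True if the acceleration trace resembles a fall."""
    peak = max(accel_g)
    after_impact = accel_g[accel_g.index(peak) + 1:]
    settled = all(a < still_threshold for a in after_impact)
    return peak > impact_threshold and settled

print(looks_like_fall([1.0, 1.0, 3.2, 1.0, 1.0]))  # spike then stillness
print(looks_like_fall([1.0, 1.0, 1.0, 1.0, 1.0]))  # normal wear, no spike
```

A real device would use far richer signals (gyroscope data, heart rate, user confirmation prompts), but the point stands: a slip that the wearer cannot recover from can automatically summon help.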
Healthcare solutions are at an exciting place right now where bad government programs are on the way out, and prosperity silhouetted in technology is sashaying into the center of public discourse. If you are careful to stay on top of the trends, you can get some deals. At the very least, you can get your head around a world that has changed so quickly even the youth seem to be left behind.
  The post 5 Trends Shaping The Health Insurance Market appeared first on ReferralMD.
seanmeverett · 8 years ago
Text
Early Warning Signs of the Impending Market Crash
The Base Code: why you should sell your stock now and put it into cryptocurrencies
I. Setting the Stage
Over the last year, if you had been doing exactly what we said in The Base Code, you would have made a +75% return on your money in the stock market, gaining almost $3,500 on a base of $4,500. To put that into perspective, if you had invested $1 million following The Base Code’s very simple strategy of BOSRUP (Buy On Sale, Reinvest Under Performers), you would have made $750,000 in a single year.
A three-quarter-million-dollar return in 52 weeks.
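The arithmetic behind these figures is easy to check (all numbers taken from the paragraph above):

```python
# Sanity-check the quoted returns.
base = 4_500          # starting capital in the small account
gain = 3_500          # approximate dollars gained over the year
rate = gain / base    # realized return, roughly the +75% quoted
print(f"Realized return: {rate:.0%}")

# The same ~75% rate applied to a $1M portfolio:
large_base = 1_000_000
print(f"Gain on $1M at 75%: ${large_base * 0.75:,.0f}")
```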
This morning, we sold all of our holdings and locked in the gain that we had been building as part of The Base Code’s fund for the last year. Below is a screenshot from our Robinhood account for proof. Read on to understand why.
II. Why We Sold
To make this as easy to read and impactful as possible, we’ll just put all of our observations into a simple bulleted list. Then you can judge for yourself.
The Dow reached its 13th all-time high. Granted, it’s a basket of only about 30 stocks, but going from 20,000 to 21,000 in such a short amount of time smells like irrational exuberance.
On the front-page of today’s New York Times: “Why the Markets are Defying Forecasts of Doom & Gloom”.
The Fed is likely to raise interest rates in March, and inflation forecasts are going up.
Retailers are down across the board. See the “Reality Retail” section in Diary of a Madman, Page 24.
We’ve heard from various grocery clients and big box retail clients that sales are down even further and that they’re only buying once for the entire holiday season already. Frozen foods are down. Organic sections aren’t doing as hot.
Consumer staples are down across the board even while financial stocks are rising. That means the confidence is buoying derivative asset classes, but people have stopped spending money on clothes, more expensive food, and have switched their habits to something else.
Consumers are buying rice, meat, some vegetables.
Inflation has continued to increase prices while paychecks haven’t kept pace. Which means your dollar doesn’t go as far.
Ethereum’s price has increased from $12 to $19 in only a few weeks. Bitcoin’s rise is doing the same. You go into this asset class if you’re scared of the rest of the market going south.
NVIDIA, our largest holding, beat expectations with its last earnings release but the price didn’t budge which means future growth of AI is already baked into the price.
Apple, our second largest holding, is now seriously overvalued, especially after Buffett dumped $17 billion into it last quarter.
You sell stock when it’s at an all-time high, not at an all-time low. Sure, we might lose some on the upside, but reading the tea leaves, it seems the crash will come in the fall.
April 15th is tax day, which we feel will be the real catalyst when all this starts to become real. Especially when people have to pay out of their pockets to the federal government reducing discretionary spending even further.
Middle Americans are leaving their retirement savings in the market (because they don’t know where else to put them) and have much of their value tied up in their homes, so they change habits where they can: spending less money on entertainment like the Hollywood box office and going out to eat (some restaurant chains’ earnings are down), while purchases at grocery stores and clothing retailers are down. You will start to see Starbucks hurting too, as people won’t spend $5 on a latte any longer.
Watch as alcohol and cigarettes start going up.
Why isn’t anyone noticing, and why isn’t the same trigger that happened last time happening? That is, why is no one talking about a bubble or an expected crash? Very simple: the media is distracted with Trump. It’s been on the New York Times’ front page for months. It’s so much noise, coupled with Bravo TV as an escape, that people are missing it this time.
We’re early. That’s true. We may have another 6 to 9 months or even a year. But when it happens, just like every other crash, it happens fast. And even with your finger on the trigger you can’t react fast enough.
But make no mistake. When people stop buying the core things that make our economy run: food, clothes, entertainment, then it has a massive ripple effect. They are leading indicators of something not so great happening on the horizon.
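Two of the price moves cited in the list above are easy to quantify with a quick calculation:

```python
# Quick percentage-change checks on the moves cited above.
def pct_change(old, new):
    """Simple percentage change between two prices."""
    return (new - old) / old * 100

print(f"Dow 20,000 -> 21,000: {pct_change(20_000, 21_000):.1f}%")  # +5.0%
print(f"Ether $12 -> $19:     {pct_change(12, 19):.1f}%")          # about +58%
```

A 5% move in a 30-stock index over a few weeks is modest in absolute terms; a 58% move in Ether over the same window is the kind of flight-to-alternatives signal the list is pointing at.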
III. Where Should We Put Our Money?
It used to be that people would put their money into gold when their confidence in a nation’s economy faltered. The reason is that gold has value beyond that of a currency like a piece of paper backed by a federal government. It has intrinsic value.
As of this morning (March 2, 2017), a very interesting thing happened: Bitcoin’s price matched the price of an ounce of gold.
For those unfamiliar, Bitcoin is a cryptocurrency. It’s partially decentralized, which means there’s no government that owns it and it’s uncorrelated with other asset classes. But it does have its problems. Even though it’s the biggest, it is controlled by a small group of individuals who are currently fighting over the right approach to take in the future.
So there’s another cryptocurrency worth watching: Ethereum. It has over $1 billion in market value, which, for a new startup, is a pretty compelling thing by itself: growing from $0 to over $1B in a few years’ time, based solely on the value of people building things on top of it.
Even the New York Times announced that big businesses are going to be building new computing systems on top of the Ethereum platform. JP Morgan Chase, Microsoft, Bank NY Mellon.
We’ve also talked to developers who’ve built reference products on the platform. It’s stable, it’s a new computing paradigm, and everyone agrees that the development team building Ethereum is not just crazy smart but also ethical.
So, as you think of an asset class that wouldn’t also crash in response to the stock market crash, but rather continue to go up because it’s a safer place to store capital, cryptocurrencies sound like a better solution than Gold.
The value is tied to the number of “app developers” building on top of Ethereum, for instance. Use Apple and its apps as an analogy: the more iOS apps in the Apple ecosystem, the more it attracts new developers, the more it attracts users, and the more valuable Apple becomes. Only imagine that these apps also act like currency, and you can invest in the whole kit and caboodle.
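The network-effect argument above can be sketched with a toy Metcalfe-style model, where platform value is assumed to grow with the square of the number of participants. The constant and the numbers are arbitrary and purely illustrative, not a pricing model.

```python
# Toy Metcalfe-style network-effect model (illustrative only).
# Assumption: platform value scales with the square of the number of
# active developers, since each pair of participants adds some value.
def platform_value(n_developers, value_per_pair=0.01):
    """Value ~ k * n^2 under the network-effect assumption."""
    return value_per_pair * n_developers ** 2

for n in (100, 1_000, 10_000):
    print(f"{n:>6} developers -> value {platform_value(n):>12,.0f}")
```

The point of the toy model is only this: under a network-effect assumption, a 10x increase in builders implies a 100x increase in value, which is why developer adoption is the metric the post keeps returning to.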
I know a few folks who are getting into Ethereum. And even though they specify that you shouldn’t treat it as a currency or investment class vehicle, the fact remains that you’re ultimately investing in a startup that’s growing quickly, becomes more valuable as more people build on top of it, already have major global companies doing it, and is uncorrelated with the rest of the market.
Another point: China is working on its own cryptocurrency, which it will use to take over part of its money supply. As the country uses more of WeChat and digital payments, this makes a whole lot of sense. Eventually, you should expect The Base Code to keep part of its interest in Chinese cryptocurrency: it creates a hedge against the American economy, there are over 1 billion people all connected to and using a single platform, and it enables the government’s economists to make real-time monetary policy changes in response to consumer behavior happening digitally.
At that point, China will be ahead of America, at least from a currency perspective.
One last point. Imagine the entire global system crumbles. Banks fail. The cash you have sitting in their digital vaults disappears overnight. You’re overseas and can’t access your cash. The FDIC can’t insure the entirety of American savings, as paltry as it might be. The only safe harbor that allows for 24/7/365 liquidity without being subject to the rampant fraud we saw leading up to the housing market collapse is a cryptocurrency.
Now, no individual’s personal incentive to deceive or make back-room deals can get in the way of your digital currency. And you can still shift it to a bank to get cash out. Some even have ATMs for these digital currencies.
Again, we’re early. It may be a sign of failure in the startup industry. But it’s a sign of Alpha in the investing industry.
— Sean
Your Recommended Reading
The Base Code
Making a Return When the Dow Is Down
The Market at an All-Time High
Invest in Apple at $90
The Gravity of a New Perspective
Early Warning Signs of the Impending Market Crash was originally published in Humanizing Tech on Medium, where people are continuing the conversation by highlighting and responding to this story.