Voices in AI – Episode 86: A Conversation with Amir Husain
About this Episode
Episode 86 of Voices in AI features Byron speaking with fellow author Amir Husain about the nature of Artificial Intelligence and Amir’s book The Sentient Machine.
Listen to this one-hour episode or read the full transcript at www.VoicesinAI.com
Transcript Excerpt
Byron Reese: This is Voices in AI brought to you by GigaOm, and I’m Byron Reese. Today my guest is Amir Husain. He is the founder and CEO of SparkCognition Inc., and he’s the author of The Sentient Machine, a fine book about artificial intelligence. In addition to that, he is a member of the AI task force with the Center for New American Security. He is a member of the board of advisors at UT Austin’s Department of Computer Science. He’s a member of the Council on Foreign Relations. In short, he is a very busy guy, but has found 30 minutes to join us today. Welcome to the show, Amir.
Amir Husain: Thank you very much for having me Byron. It’s my pleasure.
You and I had a cup of coffee a while ago and you gave me a copy of your book and I’ve read it and really enjoyed it. Why don’t we start with the book. Talk about that a little bit and then we’ll talk about SparkCognition Inc. Why did you write The Sentient Machine: The Coming Age of Artificial Intelligence?
Byron, I wrote this book because I thought that there was a lot of writing on artificial intelligence—what it could be. There’s a lot of sci fi that has visions of artificial intelligence and there’s a lot of very technical material around where artificial intelligence is as a science and as a practice today. So there’s a lot of that literature out there. But what I also saw was there was a lot of angst back in 2015, 2014. I actually had a personal experience in that realm where outside of my South by Southwest talks there was an anti-AI protest.
So just watching those protesters and seeing what their concerns were, I felt that a lot of the sort of philosophical questions, existential questions around the advent of AI come down to this: if AI indeed ends up being like Commander Data, if it has sentience, if it becomes artificial general intelligence, then it will be able to do the jobs better than we can and it will be more capable in, let’s say, the ‘art of war’ than we are. And therefore, does this mean that we will lose our jobs? That we will be meaningless and our lives will be lacking in meaning, and maybe the AI will kill us?
These are the kinds of concerns that people have had around AI and I wanted to sort of reflect on notions of man’s ability to create—the aspects around that that are embedded in our historical and religious tradition and what our conception of Man vs. he who can create, our creator—what those are and how that influences how we see this age of AI where man might be empowered to create something which can in turn create, which can in turn think.
There’s a lot of folks also that feel that this is far away, and I am an AI practitioner and I agree I don’t think that artificial general intelligence is around the corner. It’s not going to happen next May, even though I suppose some group could surprise us, but the likely outcome is that we are going to wait a few decades. I think waiting a few decades isn’t a big deal because in the grand scheme of things, in the history of the human race, what is a few decades? So ultimately the questions are still valid and this book was written to address some of those existential questions lurking in elements of philosophy, as well as science, as well as the reality of where AI stands at the moment.
So talk about those philosophical questions just broadly. What are those kinds of questions that will affect what happens with artificial intelligence?
Well I mean one question is a very simple one of self-worth. We tend to define ourselves by our capabilities and the jobs that we do. Many of our last names in many cultures are literally indicative of our profession. You know goldsmiths as an example, farmer as an example. And this is not just a European thing. Across the world you see this phenomenon of last names just reflecting the profession of a woman or a man. And it is to this extent that we internalize the jobs that we do as essentially being our identity, literally to the point where we take it on as a name.
So now when you de-link a man or a woman’s ability to produce or to engage in that particular labor that is a part of their identity, then what’s left? Are you still the human that you were with that skill? Are you less of a human being? Is humanity in any way linked to your ability to conduct this kind of economic labor? And this is one question that I explored in the book, because I don’t know whether people really contemplate this issue so directly and think about it in philosophical terms, but I do know that subjectively people get depressed when they’re confronted with the idea that they might not be able to do the job that they are comfortable doing or have been comfortable doing for decades. So at some level obviously it’s having an impact.
And the question then is: is our ability to perform a certain class of economic labor in any way intrinsically connected to identity? Is it part of humanity? And I sort of explore this concept and I say “OK well, let’s sort of take this away and let’s cut this away let’s take away all of the extra frills, let’s take away all of what is not absolutely fundamentally uniquely human.” And that was an interesting exercise for me. The conclusions that I came to—I don’t know whether I should spoil the book by sharing it here—but in a nutshell—this is no surprise—that our cognitive function, our higher order thinking, our creativity, these are the things which make us absolutely unique amongst the known creation. And it is that which makes us unique and different. So this is one question of self worth in the age of AI, and another one is…
Just to put a pin in that for a moment, in the United States the workforce participation rate is only about 50% to begin with, so only about 50% of people work, because you’ve got adults that are retired, you have people who are unable to work, you have people that are independently wealthy… I mean we already have like half of adults not working. Does it really rise to the level of a philosophical question when it’s already something we have thousands of years of history with? Like what are the really meaty things that AI gets at? For instance, do you think a machine can be creative?
Absolutely I think the machine can be creative.
You think people are machines?
I do think people are machines.
So then if that’s the case, how do you explain things like the mind? How do you think about consciousness? We don’t just measure temperature, we can feel warmth, we have a first person experience of the universe. How can a machine experience the world?
Well you know look there’s this age old discussion about qualia and there’s this discussion about the subjective experience, and obviously that’s linked to consciousness because that kind of subjective experience requires you to first know of your own existence and then apply the feeling of that experience to you in your mind. Essentially you are simulating not only the world but you also have a model of yourself. And ultimately in my view consciousness is an emergent phenomenon.
You know the very famous Marvin Minsky hypothesis of The Society of Mind. And in all of its details I don’t know that I agree with every last bit of it, but the basic concept is that there are a large number of processes that are specialized in different things that are running in the mind, the software being the mind, and the hardware being the brain, and that the complex interactions of a lot of these things result in something that looks very different from any one of these processes independently. This in general is a phenomenon that’s called emergence. It exists in nature and it also exists in computers.
One of the first few graphical programs that I wrote as a child in BASIC [coding] was drawing straight lines, and yet on a CRT display, what I actually saw were curves. I’d never drawn curves, but it turns out that when you light a large number of pixels with a certain gap in the middle, and it’s on a CRT display, there are all sorts of effects and interactions, like the moiré effect and so on and so forth, where what you thought you were drawing was lines, and it shows up, if you look at it from an angle, as curves.
So I mean the process of drawing a line is nothing like drawing a curve; there was no active intent or design to produce a curve, the curve just shows up. It’s a very simple example: a kid writing a few lines of BASIC can do this experiment and see it, but there are obviously more complex examples of emergence as well. And so consciousness to me is an emergent property, it’s an emergent phenomenon. It’s not about the one thing.
I don’t think there is a consciousness gland. I think that there are a large number of processes that interact to produce this consciousness. And what does that require? It requires for example a complex simulation capability which the human brain has, the ability to think about time, to think about objects, model them and to also apply your knowledge of physical forces and other phenomena within your brain to try and figure out where things are going.
So that simulation capability is very important, and then the other capability that’s important is the ability to model yourself. So when you model yourself and you put yourself in a simulator and you see all these different things happening, there is not the real pain that you experience when you simulate, for example, being struck by an arrow, but there might be some fear. And why is that fear emanating? It’s because you watch your own model in your imagination, in your simulation, suffer some sort of a problem. And now that is all very internal. Right? None of this has happened in the external world, but you’re conscious of this happening, so to me at the end of the day it has some fundamental requirements. I believe simulation and self-modeling are two of those requirements, but ultimately it’s an emergent property.
Listen to this one-hour episode or read the full transcript at www.VoicesinAI.com
Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.
Voices in AI – Episode 85: A Conversation with Ilya Sutskever
About this Episode
Episode 85 of Voices in AI features host Byron Reese and Ilya Sutskever of OpenAI talking about the future of general intelligence and the ramifications of building a computer smarter than us.
Listen to this one-hour episode or read the full transcript at www.VoicesinAI.com
Transcript Excerpt
Byron Reese: This is Voices in AI brought to you by GigaOm and I’m Byron Reese. Today my guest is Ilya Sutskever. He is the co-founder and the chief scientist at OpenAI, one of the most fascinating institutions on the face of this planet. Welcome to the show Ilya.
Ilya Sutskever: Great to be here.
Just to bring the listeners up to speed, talk a little bit about what OpenAI is, what its mission is, and kind of where it’s at. Set the scene for us of what OpenAI does.
Great, for sure. The best way to describe OpenAI is this: so at OpenAI we take the long term view that eventually computers will become as smart or smarter than humans in every single way. We don’t know when it’s going to happen — some number of years, something [like] tens of years, it’s unknown. And the goal of OpenAI is to make sure that when this does happen, when computers which are smarter than humans are built, when AGI is built, then its benefits will be widely distributed. We want it to be a beneficial event, and that’s the goal of OpenAI.
And so we were founded three years ago, and since then we’ve been doing a lot of work in three different areas. We’ve done a lot of work in AI capabilities, and over the past three years we’ve done a lot of work we are very proud of. Some of the notable highlights are: our Dota results, where we had the first and very convincing demonstration of an agent playing a real time strategy game, trained with reinforcement learning with no human data. We’ve trained robot hands to re-orient a block. This was really cool, and it was cool to see it transfer.
And recently we’ve released GPT-2 — a very large language model which can generate very realistic text as well as solve lots of different NLP problems [with] a very high level of accuracy. And so this has been our work in capabilities.
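To make the GPT-2 point concrete, here is a minimal sketch of sampling text from the publicly released GPT-2 weights. It is an illustration only, not OpenAI's own tooling, and it assumes the Hugging Face transformers package, the small "gpt2" checkpoint and an arbitrary prompt.

```python
# Hedged sketch: sampling text from the released GPT-2 weights.
# Assumes the Hugging Face `transformers` package and the small "gpt2"
# checkpoint; this is an illustration, not OpenAI's internal setup.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "The goal of OpenAI is"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

# Sampling (rather than greedy decoding) is what produces the varied,
# realistic-looking continuations described above.
output_ids = model.generate(
    input_ids,
    max_length=60,
    do_sample=True,
    top_k=40,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```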
Another thrust of the work that we are doing is AI safety, which at [its] core is the problem of finding ways of communicating a very complicated reward function to an agent, so that the agent that we build can achieve goals with great competence, and will do so while taking human values and preferences into account. And so we’ve done a significant amount of work there as well.
And the third line of work we’re doing is AI policy, where we basically have a number of really good people thinking hard about what kind of policies should be designed and how should governments and other institutions respond to the fact that AI is improving pretty rapidly. But overall our goal, eventually the end game of the field, is that AGI will be built. The goal of OpenAI is to make sure that the development of AGI will be a positive event and that its benefits are widely distributed.
So 99.9% of all the money that goes into AI is working on specific narrow AI projects. I tried to get an idea of how many people are actually working on AGI and I find that to be an incredibly tiny number. There’s you guys, maybe you would say Carnegie Mellon, maybe Google, there’s a handful, but is my sense of that wrong? Or do you think there are lots of groups of people who are actually explicitly trying to build a general intelligence?
So explicitly… OK, a great question. Most people, most research labs, indeed don’t have this as their goal, but I think that the work of many people indirectly contributes to this. For example, much better learning algorithms, better network architectures, better optimization methods, all tools which are classically categorized as conventional machine learning, are also likely to be directly contributing to this…
Well let’s stop there for a second, because I noticed you changed your word there to “likely.” Do you still think it’s an open question whether narrow AI, whatever technologies we have that do that, has anything to do with general intelligence, or is it still the case that a general intelligence might have absolutely nothing to do with backpropagation, neural nets and machine learning?
So I think it’s very highly unlikely. Sorry. I want to make it clear: I think that the tools, that is, the field of machine learning that is developing today, such as deep networks, backpropagation — I think those are immensely powerful tools, and I think that it is likely that they will stay with us, with the field, for a long time, all the way until we build true general intelligence. At the same time I also believe, and I want to emphasize, that important missing pieces exist and we haven’t figured out everything. But I think that deep learning has proven itself to be so versatile and so powerful, and it’s basically been exceeding our expectations at every turn. And so for these reasons I feel that deep learning is going to stay with us.
Well let’s talk about that though, because one could summarize the techniques we have right now as: let’s take a lot of data about the past, let’s look for patterns in that data and let’s make predictions about the future, which isn’t all that exciting when you say it like that. It’s just that we’ve gotten very good at it.
But why do you believe that method is the solution to things like creativity, intuition, emotion and all of these kinds of human abilities? It seems, at an intuitive level, that if you want to teach a machine to play Dota or Go or whatever, yeah, that works great. But when you really come down to human-level intelligence, with its versatility, with transfer learning, with all the things we do effortlessly, it’s not even… it doesn’t seem at first glance to be a match. So why do you suspect that it is?
Well I mean I can tell you how I look at it. So for example you mentioned intuition as one thing – so you used a certain phrase to describe the current tools, where you kind of look for patterns in the past data and you use that to make predictions about the future, and therefore it sounds not exciting. But I don’t know if I’d agree with that statement. And on the question of intuition, I can tell you a story about AlphaGo. So… if you look at how AlphaGo works, there is a convolutional neural network.
OK actually let me give you a better analogy – so I believe there is a book by Malcolm Gladwell where he talks about experts, and one of the things that he has to say about experts is that an expert, as a result of all their practice, can look at a very complicated situation and then instantly tell, like, the three most important things in this situation. And then they think really hard about which of those things is really important. And apparently the same thing happens with Go players, where a Go player might look at the board and then instantly see the most important moves, and then do a little bit of thinking about those moves. And like I said, instantly seeing what those moves are — this is their intuition. And so I think that it’s basically unquestionable that the neural network that’s inside AlphaGo calculates a solution very well. So I think it’s not correct to say that intuition cannot be captured.
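To make the idea of intuition as a learned prior concrete, here is a toy sketch, an illustration only and not AlphaGo's actual code: a policy network scores every legal move, and expensive look-ahead is spent only on the few moves it rates highest.

```python
import numpy as np

# Toy sketch of "intuition as a learned prior". A random vector stands in
# for a trained convolutional policy network; the point is only the shape
# of the computation: score all moves, then search just the top few.
rng = np.random.default_rng(0)

def policy_network(num_moves):
    """Stand-in for a trained policy net: a probability for each legal move."""
    logits = rng.normal(size=num_moves)
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

legal_moves = [f"move_{i}" for i in range(361)]   # 19x19 board intersections
priors = policy_network(len(legal_moves))

# "Intuition": instantly narrow 361 candidates down to the three most promising.
top3 = np.argsort(priors)[-3:][::-1]

# Deliberate thinking: only these candidates get the expensive look-ahead.
for i in top3:
    print(f"search deeper on {legal_moves[i]} (prior {priors[i]:.3f})")
```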
Listen to this one-hour episode or read the full transcript at www.VoicesinAI.com
Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.
How Intel’s Newest Product Enhancements Could Redefine the Future of Infrastructure Design
On the 2nd of April, at their Data Centric Innovation Day, Intel announced a slew of new products, including brand spanking new ones as well as updates to existing product line-ups. There was some news for everybody – whether your specific interests are in the datacenter, the edge, or infrastructure in general. Among other things, what impressed me the most was the new Optane DC Persistent Memory DIMM, and even more so its implications when coupled with the new (56-core) Intel Xeon Platinum 9200 CPU.
Datacenter Optanization!
Up until yesterday, Optane was already considered a great technology, although to some extent it was seen as a niche product. Even though more and more vendors have been adopting it as an adequate substitute for NVDIMMs, as a tier 0 or a cache for their storage systems, it was still hard to foresee a broader adoption of the technology. Yes, it is cheaper than RAM and faster than a standard NAND-based device, but that’s about it.
Maybe it was part of Intel’s strategy. In fact, the first generation of Intel Optane products was developed with Micron, and perhaps they didn’t want to be too aggressive with that first generation — something which has most likely changed radically after their divorce. The introduction of the Optane DC Persistent Memory DIMM actually gives a good idea of the real potential and benefits of this technology and demonstrates how it could change the way infrastructures will be designed in the future, especially for data-intensive workloads. In practice, in a very simplistic way, Optane DC Persistent Memory DIMMs work in conjunction with standard DDR4 RAM DIMMs. They are bigger (up to 512GB each) and slightly slower than DDR4 DIMMs, but they allow configuration of servers with several TBs of memory at a very low cost. Optane is slower than RAM, but caching techniques and the fact that these DIMMs sit next to the CPU make the solution a good compromise. At the end of the day, you are trading some performance for a huge amount of capacity, avoiding data starvation for a CPU that otherwise would have to access data on SSDs or, even worse, HDDs or the network.
How Does It Work?
There are two operating modes for Optane DIMMs, persistent and non-persistent. I know it could sound confusing, but it’s actually very straightforward.
The way the DIMM operates is selected at the beginning of the bootstrap process. When non-persistent mode is selected, the DIMMs look like RAM, and the real RAM is used as a cache. This means that you don’t have to re-write your app, and practically any application can benefit from the increased memory capacity. On the other hand, when Optane DIMMs operate in persistent mode, it is the application that stores data directly in the Optane and manages RAM and Optane as it sees fit and, as the name suggests, your data is still there after a reboot. SAP HANA, for example, has already demonstrated this mode, and there are several other applications that will follow suit. Take a look at the video below, recorded at the Tech Field Day event that followed the main presentation; it’s a good deep dive on the product.
[Embedded video: Tech Field Day deep dive on Optane DC Persistent Memory]
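As a rough illustration of the persistent mode described above, where the application itself decides what lives in persistent memory, here is a hedged Python sketch that memory-maps a file on a DAX-mounted persistent-memory filesystem. The mount point is an assumption, and real applications would typically use Intel's PMDK (libpmem) for proper flush semantics rather than plain mmap.

```python
import mmap
import os

# Hedged sketch of application-managed (persistent-mode) use of Optane:
# map a region of persistent memory and place data there directly.
# /mnt/pmem0 is an assumed DAX mount point; production code would normally
# use Intel's PMDK (libpmem) for cache-line flushing instead of plain mmap.
PMEM_FILE = "/mnt/pmem0/appdata.pmem"
SIZE = 4096

fd = os.open(PMEM_FILE, os.O_CREAT | os.O_RDWR, 0o600)
os.ftruncate(fd, SIZE)

buf = mmap.mmap(fd, SIZE)
buf[:16] = b"persistent-state"   # data written here survives a reboot
buf.flush()                      # ask the kernel to make the write durable
buf.close()
os.close(fd)
```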
There Are Several Benefits (And a Few Trade-Offs)
All the performance tests shown during the event demonstrated the benefits of this solution. Long story short, and crystal clear: every application that relies on large amounts of RAM will see a huge benefit. This is mostly because the latency to reach active data is lower than with any other solution on the market, but it is also because it does so at a cost that is a fraction of what you’d pay for a 100% RAM configuration, which is not really possible today anyway due to the size and cost of RAM DIMMs. All of this translates into faster results, fewer servers to obtain them, better efficiency and, of course, lower infrastructure costs.
Furthermore, with more and more applications taking advantage of the persistent memory mode, we will see interesting applications with Optane DIMMs that could replace traditional, and expensive, NVDIMMs in many scenarios like, for example, storage controllers. What are the trade-offs then? Actually, these are not real trade-offs, but the consequences of the introduction of this innovative technology. In fact, Optane DIMMs work only on new servers based on the latest Intel CPUs, those announced alongside the DIMMs. The reason for this is that the memory controller is in the CPU, and older CPUs wouldn’t be able to understand the Optane DIMM nor manage the interaction between RAM and Optane.
Maintaining the Balance
As mentioned earlier, Intel announced many other products alongside Optane Persistent Memory DIMMs. All of them are quite impressive and, at the same time, necessary to justify each other; meaning that it would be useless to have multi-TB RAM systems without a CPU strong enough to get the work done. The same goes for the network, which can quickly become a bottleneck if you don’t provide the necessary bandwidth to get data back and forth from the server.
From my point of view, it’s really important to understand that we are talking about data-crunching monsters here, with the focus on applications like HPC, Big Data, AI/ML and the like. These are not your average servers for VMs, not today at least. On the other hand, it is also true that this technology opens up many additional options for enterprise end users too, including the possibility to create larger VMs or consolidate more workloads on a single machine (with all its pros and cons).
Another feature which I thought noteworthy is the new set of instructions added to the new Xeon Platinum 9200 CPU for deep learning. We are far from having general-purpose CPUs competing against GPUs, but the benchmarks given during the presentations show an incredible improvement in this area for inference workloads (the process that is behind day-to-day ML activity after the neural network is trained). Intel has done a great job, both on hardware and software. With more and more applications taking advantage of AI/ML, an increasing number of users will be able to benefit from it.
Closing the Circle
In this article, I’ve covered only a few aspects of these announcements, those that are more related to my job. There is much more to it, including IoT, edge computing, and security. I was especially impressed because the announcements express a vision that is broad, clear and confident, providing answers for the most challenging aspects of modern IT.
Most of the products presented during this announcement are focused on ultimate performance and efficiency or, at least, on finding the best compromise to serve next-gen, data-hungry and highly demanding applications. That is something which is beyond the reach of many enterprises today and more in the ballpark of web and hyperscalers. That said, even if on a smaller scale, all enterprises are beginning to face these kinds of challenges and, no matter whether the solutions come from the cloud or from their on-prem infrastructure, the technology to address them is now more accessible than ever.
Voices in AI – Episode 84: A Conversation with David Cox
About this Episode
Episode 84 of Voices in AI features host Byron Reese and David Cox discussing classifications of AI and how the research has been evolving and growing.
Listen to this one-hour episode or read the full transcript at www.VoicesinAI.com
Transcript Excerpt
Byron Reese: This is Voices in AI, brought to you by GigaOm and I’m Byron Reese. I’m so excited about today’s show. Today we have David Cox. He is the Director of the MIT IBM Watson AI Lab, which is part of IBM Research. Before that he spent 11 years teaching at Harvard, interestingly in the Life Sciences. He holds an AB degree from Harvard in Biology and Psychology, and he holds a PhD in Neuroscience from MIT. Welcome to the show David!
David Cox: Thanks. It’s a great pleasure to be here.
I always like to start with my Rorschach question which is, “What is intelligence and why is Artificial Intelligence artificial?” And you’re a neuroscientist and a psychologist and a biologist, so how do you think of intelligence?
That’s a great question. I think we don’t necessarily need to have just one definition. I think people get hung up on the words, but at the end of the day, what makes us intelligent, what makes other organisms on this planet intelligent is the ability to absorb information about the environment, to build models of what’s going to happen next, to predict and then to make actions that help achieve whatever goal you’re trying to achieve. And when you look at it that way that’s a pretty broad definition.
Some people are purists and they want to say this is AI, but this other thing is just statistics or regression or if-then-else loops. At the end of the day, what we’re about is we’re trying to make machines that can make decisions the way we do and sometimes our decisions are very complicated. Sometimes our decisions are less complicated, but it really is about how do we model the world, how do we take actions that really drive us forward?
It’s funny, the AI word too. I’m a recovering academic, as you said. I was at Harvard for many years, and I think as a field we were really uncomfortable with the term ‘AI,’ so we desperately wanted to call it anything else. In 2017 and before we wanted to call it ‘machine learning,’ or we wanted to call it ‘deep learning’ [to] be more specific. But in 2018, for whatever reason, we all just gave up and we just embraced this term ‘AI.’ In some ways I think it’s healthy. But when I joined IBM I was actually really pleasantly surprised by some framing that the company had done.
IBM does this thing called the Global Technology Outlook or GTO which happens every year and the company tries to collectively figure out—research plays a very big part of this—we try to figure out ‘What does the future look like?’ And they came up with this framing that I really like for AI. They did something extremely simple. They just put some adjectives in front of AI and I think it clarifies the debate a lot.
So basically, what we have today like deep learning, machine learning, tremendously powerful technologies are going to disrupt a lot of things. We call those Narrow AI and I think that narrow framing really calls attention to the ways in which even if it’s powerful, it’s fundamentally limited. And then on the other end of the spectrum we have General AI. This is a term that’s been around for a long time, this idea of systems that can decide what they want to do for themselves that are broadly autonomous and that’s fine. Those are really interesting discussions to have but we’re not there as a field yet.
In the middle, and I think this is really where the interesting stroke is, there’s this notion we have of Broad AI, and I think that’s really where the stakes are today. How do we have systems that are able to go beyond what we have that’s narrow, without necessarily getting hung up on all these notions of what ‘General Intelligence’ might be? So things like having systems that are interpretable, having systems that can work with different kinds of data, that can integrate knowledge from other sources: that’s sort of the domain of Broad AI. Broad Intelligence is really what the lab I lead is all about.
There’s a lot in there and I agree with you. I’m not really that interested in that low end and what’s the lowest bar in AI. What makes the question interesting to me is really the mechanism by which we are intelligent, whatever that is, and does that intelligence require a mechanistic reductionist view of the world? In other words, is that something that you believe we’re going to be able to duplicate either… in terms of its function, or are we going to be able to build machines that are as versatile as a human in intelligence, and creative and would have emotions and all of the rest, or is that an open question?
I have no doubt that we’re going to eventually, as a human race be able to figure out how to build intelligent systems that are just as intelligent as we are. I think in some of these things, we tend to think about how we’re different from other kinds of intelligences on Earth. We do things like… there was a period of time where we wanted to distinguish ourselves from the animals and we thought of reason, the ability to reason and do things like mathematics and abstract logic was what was uniquely human about us.
And then, computers came along and all of a sudden, computers can actually do some of those things better than we can even in arithmetic and solving complex logic problems or math problems. Then we move towards thinking that maybe it’s emotion. Maybe emotion is what makes us uniquely human and rational. It was a kind of narcissism I think to our own view which is understandable and justifiable. How are we special in this world?
But I think in many ways we’re going to end up having systems that do have something like emotion. Even you look at reinforcement learning—those systems have a notion of reward. I don’t think it’s such a far reach to think maybe we’ll even in a sci-fi world have machines that have senses of pleasure and hopes and ambitions and things like that.
At the end of the day, our brains are computers. I think that’s sometimes a controversial statement, but it’s one that I think is well-grounded. It’s a very sophisticated computer. It happens to be made out of biological materials. But at the end of the day, it’s a tremendously efficient, tremendously powerful, tremendously parallel nanoscale biological computer. These are like biological nanotechnology. And to the extent that it is a computer, and to the extent that we can agree on that, Computer Science gives us equivalencies. We can build a computer with different hardware. We don’t have to emulate the hardware. We don’t have to slavishly copy the brain, but it is sort of a given that we will eventually be able to do everything the brain does in a computer. Now of course that’s all farther off, I think. Those are not the stakes—those aren’t the battlefronts that we’re working on today. But I think the sky’s the limit in terms of where AI can go.
You mentioned Narrow and General AI, and this classification you’re putting in between them is broad, and I have an opinion and I’m curious of what you think. At least with regards to Narrow and General they are not on a continuum. They’re actually unrelated technologies. Would you agree with that or not?
Would you say that a narrow (AI) gets a little better, then a little better, a little better, a little better, a little better, then, ta-da! One day it can compose a Hamilton? Or do you think that they may be completely unrelated? That this model of, ‘Hey let’s take a lot of data about the past and let’s study it very carefully to learn to do one thing’ is very different than whatever General Intelligence is going to be?
There’s this idea that if you want to go to the moon, one way to go to the moon—to get closer to the moon—is to climb the mountain.
Right. Exactly.
And you’ll get closer, but you’re not on the right path. And maybe you’d be better off on top of a building, or with a little rocket that at first only goes as high as the tree or as high as the mountain, but it’ll get you where you need to go. I do think there is a strong flavor of that with today’s AI.
And today’s AI, if we’re plain about things, is deep learning. What’s really been successful in deep learning is supervised learning. We train a model to do every part of seeing based on classifying objects, and you classify a lot – many images; you have lots of training data and you build a statistical model. And that’s everything the model has ever seen. It has to learn from those images and from that task.
And we’re starting to see that actually the solutions you get—again, they are tremendously useful, but they do have a little bit of that quality of climbing a tree or climbing a mountain. There’s a bunch of recent work suggesting… basically that these models are looking at texture, so a lot of the solution for supervised vision ends up being about rough texture.
There are also some wonderful examples where you take a captioning system—a system that can take an image and produce a caption. It can produce wonderful captions in cases where the images look like the ones it was trained on, but you show it anything just a little bit weird, like an airplane that’s about to crash or a family fleeing their home on a flooding beach, and it’ll produce things like ‘an airplane is on the tarmac at an airport’ or ‘a family is standing on a beach.’ It’s like it kind of missed the point: it was able to do something because it learned correlations between the inputs it was given and the outputs that we asked it for, but it didn’t have a deep understanding. And I think that’s the crux of what you’re getting at, and I agree at least in part.
So with Broad, the way you’re thinking of it, it sounds to me just from the few words you said, it’s an incremental improvement over Narrow. It’s not a junior version of General AI. Would you agree with that? You’re basically taking techniques we have and just doing them bigger and more expansively and smarter and better, or is that not the case?
No. When we think about Broad AI, we really are thinking about a little bit ‘press the reset button, don’t throw away things that work.’ Deep learning is a set of tools which is tremendously powerful, and we’d be kind of foolish to throw them away. But when we think about Broad AI, what we’re really getting at is how do we start to make contact with that deep structure in the world… like commonsense.
We have all kinds of common sense. When I look at a scene I look at the desk in front of me, I didn’t learn to do tasks that have to do with the desk in front of me by lots and lots of labeled examples or even many, many trials in a reinforcement learning kind of setup. I know things about the world – simple things. And things we take for granted like I know that my desk is probably made of wood and I know that wood is a solid, and solids can’t pass through other solids. And I know that it’s probably flat, and if I put my hand out I would be able to orient it in a position that would be appropriate to hover above it…
There are all these affordances and all this super simple commonsense stuff that you don’t get when you just do brute force statistical learning. When we think about Broad AI, we’re really thinking about is ‘How do we infuse that knowledge, that understanding and that commonsense?’ And one area that we’re excited about and that we’re working on here at the MIT IBM Lab is this idea of neuro-symbolic hybrids.
So again, this is in the spirit of ‘don’t throw away neural networks.’ They’re wonderful at extracting certain kinds of statistical structure from the world – a convolutional neural network does a wonderful job of extracting information from an image. LSTMs and recurrent neural networks do a wonderful job of extracting structure from natural language. But the idea is building in symbolic systems as first-class citizens, in a hybrid system that combines those all together.
Some of the work we’re doing now is building systems where we use neural networks to extract structure from the noisy, messy inputs of vision and different modalities, but then actually having symbolic AI systems on top. Symbolic AI systems have been around basically contemporaneously with neural networks. They’ve been ‘in the wings’ all this time. Neural networks, deep learning… everyone knows this is a rebrand of the neural networks from the 1980s that are suddenly powerful again. They’re powerful for the first time because we have enough data and we have enough compute.
I think in many ways a lot of the symbolic ideas, sort of logical operations, planning, things like that, are also very powerful techniques, but they haven’t really been able to shine yet, partly because they’ve been waiting for something, just the way that neural networks were waiting for compute and data to come along. I think in many ways some of these symbolic techniques have been waiting for neural networks to come along—because neural networks can kind of bridge that [gap] from the messiness of the signals coming in to this sort of symbolic regime where we can start to actually work. One of the things we’re really excited about is building these systems that can bridge across that gap.
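As a toy illustration of that division of labour (my own simplified sketch, not the lab's systems): a stubbed "neural" perception step turns pixels into a small symbolic scene description, and ordinary symbolic rules then answer a question over it.

```python
# Toy neuro-symbolic sketch: a stubbed "neural" perception step emits symbols,
# and a symbolic rule answers a question over them. This illustrates the
# division of labour described above, not the MIT-IBM Lab's actual systems.

def neural_perception(image):
    """Stand-in for a CNN: maps pixels to a symbolic scene description."""
    return [
        {"id": 1, "shape": "cube",   "color": "red",  "x": 0.2},
        {"id": 2, "shape": "sphere", "color": "blue", "x": 0.7},
    ]

def left_of(a, b):
    """Symbolic relation defined over the extracted scene, not over pixels."""
    return a["x"] < b["x"]

scene = neural_perception(image=None)

# Symbolic reasoning: "is there a red object to the left of a sphere?"
answer = any(
    left_of(obj, other)
    for obj in scene if obj["color"] == "red"
    for other in scene if other["shape"] == "sphere"
)
print(answer)  # True
```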
Listen to this one-hour episode or read the full transcript at www.VoicesinAI.com
Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.
The Future of Software Innovation? Hardware-Enabled AI & ML Innovation
Hardware innovation is a fickle beast. It takes money, lots of money. It takes time, a great team, and execution in more than twenty separate domains, many of which are often overlooked until it’s too late (certifications anyone?!). But, I’ve got some good news. Hardware is back and it’s about to get really exciting.
You’re probably thinking right now, “All I ever read about is how AI, ML, blockchain, and XR are ready to revolutionize the world” and that’s exactly the point. There are some amazing software technologies coming out, but this “new” software has hardware in its DNA. Until recently, smart technologies have largely been limited by their access points: computers, tablets, smartphones, etc. Going forward, hardware innovations will become increasingly integral and valuable as the interface for tomorrow’s software. Hardware will capture the data through wearables, hearables, cameras and an increasing variety of sensors and will then be leveraged as the outputs to interact with the world as robots, drones, and the myriad of other IoT devices that are being developed.
As the ecosystem of devices, computation, connection, and data evolves, the platforms, tools and systems are naturally finding more synergy and lowering the barriers to integration. The line between what is a hardware or software product will continue to blur. The sensor technologies leading the way are cameras and microphones. If there is a camera, there’s a good chance there’s an AI stack behind it, with self-driving cars being the most prominent example. On the microphone/speaker side, the Smart Home assistants Amazon Echo, Google Home, and others are obvious and ubiquitous.
The beauty of hardware-enabled AI/ML is that it not only crosses the boundary between the physical and virtual, but also between analog and digital. It’s particularly valuable when interacting with the world and dealing with its messy data. The next generation of AI hardware startups will take all that messy analog data and transform it into productive and executable knowledge that provides better experiences, all the way from shopping to cancer treatments that enable personalized health care at scale.
While the future is clear, the hurdles are as well. Processing power, robustness, generalization and cost are all tradeoffs future hardware products will need to balance. Unlike the on-demand and scalable cloud and other services software enjoys, each hardware product will have onboard processing, sensors, connectivity tech, and other requirements that all make their way into the product cost. Sure a GPU can be thrown into the BOM, but can the market accept the price? High performance computing at the edge is still in its nascent stages so real-time processing of images and other data can be expensive as well.
At the same time, specific tools are being developed to improve this integration. We’re seeing a lot of edge computing such as NVIDIA’s Jetson line and Google’s Edge TPUs. TensorFlow is probably the most common AI framework, since it has such broad support for hardware deployment, including Raspberry Pi. ROS is still fairly popular despite being a jumble of mismatched and complicated software, and people have done ports to OpenAI’s Gym environments.
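As a hedged example of that deployment path, TensorFlow models running on small edge hardware, here is a minimal TensorFlow Lite inference loop; the model file name and the 224x224 input shape are assumptions.

```python
import numpy as np
import tensorflow as tf

# Minimal edge-inference sketch with TensorFlow Lite (the kind of code that
# runs on a Raspberry Pi-class device). "model.tflite" and the 224x224 RGB
# input shape are assumptions, not a specific product's model.
interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# A camera frame would go here; a random image stands in for one.
frame = np.random.rand(1, 224, 224, 3).astype(np.float32)

interpreter.set_tensor(input_details[0]["index"], frame)
interpreter.invoke()
predictions = interpreter.get_tensor(output_details[0]["index"])
print("top class:", int(np.argmax(predictions)))
```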
The future of hardware is bright and full of highly accessible, processed, happiness-inducing data.
Join 600 hardware innovators, entrepreneurs, disruptors and investors at HardwareCon 2019, the premier event for hardware innovation. Plan to attend April 17-18 at the Computer History Museum in Mountain View, California. Use promo code: GIGA-OM-IL for a special 20% discount on the ticket. Visit www.hardwarecon.com to redeem the discount.
Author Bio
Greg Fisher is all about hardware innovation. As founder/CEO of Berkeley Sourcing Group, Greg has spent the last 13 years working with over 1000 hardware startups to develop and manufacture innovative products. Living in China one third of that time, he worked with hardware startups and factories to help improve their designs for manufacturing, qualify and select factories, manage factory negotiations and relationships, and develop and implement quality control processes. With this history, Greg has a unique perspective and immense passion for what it takes for hardware startups to build the right foundation and scale their operations.
Seeing the need for more support for hardware startups to realize success, Greg started Hardware Massive, which is now the leading Global Community/Platform for Hardware Startup Innovation, and HardwareCon, the Bay Area’s premier hardware innovation conference. Their missions are to empower hardware startups to succeed through networking, events, education, and providing access to resources.
How IBM is Rethinking its Data Protection Line-Up
Following up on my take on the evolution of product and strategy at companies like Cohesity and NetApp, today I’d like to talk about IBM and its newer data protection solution, Spectrum Protect Plus.
IBM AND ITS DATA PROTECTION LINE-UP
In short, not too long ago IBM changed the names of all its storage products. I totally understand why they did it and it makes a lot of sense from a marketing point of view, but it is still confusing for people like myself that were familiar with the products before this change. Besides, with products now having similar names, it could be difficult to discern who does what.
In this particular case, data protection, you now have two products:
IBM Spectrum Protect: the good old TSM. While this product is one of those that wrote backup’s history and supports backup for a myriad of operating systems and applications, it is complex to operate and designed for large environments. Furthermore, it was designed well before the advent of hypervisors and modern applications, making it really tough to protect these environments efficiently.
IBM Spectrum Protect Plus: a new product designed from the ground up for modern environments, including hypervisors, NoSQL DBs and more. It has a very modern snapshot-based design that pairs nicely with VMware CBT (Changed Block Tracking), for example (see the sketch after this list). It’s easy to use and can be adopted by IT organizations of all sizes.
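To give an idea of how a snapshot-plus-CBT backup works in practice, here is a hedged pyVmomi sketch of asking vSphere which disk areas changed since a previous backup. The host, credentials, VM name, device key and change ID are placeholders, and this illustrates the general technique rather than Spectrum Protect Plus's implementation.

```python
from pyVim.connect import SmartConnect, Disconnect

# Hedged sketch of Changed Block Tracking (CBT): ask vSphere which areas of a
# virtual disk changed since a previous backup, so only those blocks are read.
# Host, credentials, VM name, device key and change ID are all placeholders;
# this shows the general technique, not a specific product's implementation.
si = SmartConnect(host="vcenter.example.com", user="backup", pwd="secret")
try:
    content = si.RetrieveContent()
    vm = content.searchIndex.FindByDnsName(dnsName="app-vm-01", vmSearch=True)

    info = vm.QueryChangedDiskAreas(
        snapshot=vm.snapshot.currentSnapshot,  # snapshot taken for this run
        deviceKey=2000,                        # first virtual disk
        startOffset=0,
        changeId="*",   # "*" = all allocated blocks; pass the change ID saved
    )                   # from the previous backup to get an incremental run
    for area in info.changedArea:
        # A backup product reads exactly these extents from the disk.
        print("changed extent:", area.start, area.length)
finally:
    Disconnect(si)
```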
[Embedded video from Storage Field Day 18]
Videos from SFD18 can give you a good idea of the features and the potential of IBM Spectrum Protect Plus, and there are a few aspects that I think are interesting to note:
IBM Spectrum Protect Plus might be a good companion for IBM TSM customers. Although the two don’t share anything, it is still an IBM product, and from the procurement and budgetary standpoints it could be much easier to adopt this solution instead of others.
Licensing is pretty flexible, making this product competitive from a cost standpoint on smaller infrastructures too. And this also makes it easier to place it in large infrastructures, aligning the cost with what is actually being protected.
Spectrum Protect Plus is not at the level of features you can find in more mature products like Veeam, but its team is very committed and has a very aggressive release schedule.
This product has a good, scalable, architecture and the roadmap shows great potential for future releases, especially when it comes to sophisticated features around data reuse and management.
CLOSING THE CIRCLE
As I wrote above, Spectrum Protect Plus might be a good option for IBM customers that already have TSM for their legacy infrastructure. What the IBM Spectrum Protect family lacks the most for this type of customer at the moment is a sort of unified GUI to allow sysadmins to speed up operations and have better control of the backup infrastructure. But, as far as I can tell, I’m not the first one to note this deficiency … the development team is already looking into it.
NetApp NDAS Integrates On-Premises & Cloud as One
At the end of February 2019, at Storage Field Day 18, NetApp presented another tool aimed at integrating its on-premises solutions with the cloud, NetApp Data Availability Service (NDAS). As already mentioned in a previous post, this tool might be somewhat immature, but it has huge potential if developed in the right way.
TWO WORDS ABOUT NDAS
Long story short, NDAS takes advantage of SnapMirror functionality available on NetApp arrays and syncs volumes to the cloud. The cool part is that the content of the volumes is converted into objects (taking advantage of AWS S3). In the short term, it’s all about saving money, because S3 is way cheaper than Elastic Block Storage (EBS), but the real deal comes from the fact that data stored in this format is much more re-usable for a number of use cases, including index and search, ransomware protection, analytics and so on. Take a look at the videos recorded during their SFD18 session to get an idea of what I’m talking about.
[Embedded video from Storage Field Day 18]
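To illustrate why the object format matters, here is a hedged boto3 sketch that enumerates backup copies sitting in S3 so they can be catalogued or scanned; the bucket and prefix are assumptions, not NDAS's actual object layout.

```python
import boto3

# Hedged sketch: once backup copies live in S3 as objects, ordinary tooling
# can enumerate, index and search them. The bucket and prefix below are
# assumptions and do not reflect NDAS's actual object layout.
s3 = boto3.client("s3")

paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket="ndas-backup-copies", Prefix="vol1/"):
    for obj in page.get("Contents", []):
        # A search/index service could catalogue names, sizes and timestamps,
        # or fetch individual objects for analytics or ransomware scanning.
        print(obj["Key"], obj["Size"], obj["LastModified"])
```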
NETAPP IS (ALSO) A CLOUD COMPANY
NetApp already had me a few years back when they presented their Data Fabric vision. But, as is the case for any other vision, it’s only good on paper until it gets executed properly. If I needed confirmation about how they are executing, NDAS, and especially the speech that Dave Hitz gave at the beginning of the session, were what sustained my excitement level about them. He gave clear examples of their cloud products, their approach with customers, of partnerships that are all-in with cloud, and of cloud-native end users — all while keeping an eye on traditional customers and how to support them in their journey to the cloud.
The risk they are taking is to cannibalize some of their on-premises sales or, better, their traditional product sales … but this is paying off, both in terms of mindshare as well as overall results. And if we look at company growth and financial results in the last three years, they seem nothing but positive.
CLOSING THE CIRCLE
Many storage vendors are now more cloud-focused than in the past. NetApp just started sooner and had the courage to disrupt its internal status quo: they listened to end users, hired people with a different mindset, and designed new cloud-focused products and services, but they also opened their core products to better cloud integration. And this is paying off big time.
Usually, you can expect this kind of turnaround from smaller, younger, and nimbler companies, but it’s always refreshing to see it happen at a company of NetApp’s size.
Originally posted on Juku.it
Voices in AI – Episode 83: A Conversation with Margaret Mitchell
About this Episode
Episode 83 of Voices in AI features host Byron Reese and Margaret Mitchell discussing the nature of language and its impact on machine learning and intelligence.
Listen to this one-hour episode or read the full transcript at www.VoicesinAI.com
Transcript Excerpt
Byron Reese: This is Voices in AI brought to you by GigaOm and I’m Byron Reese. Today my guest is Margaret Mitchell. She is a senior research scientist at Google doing amazing work. And she studied linguistics at Reed College and Computational Linguistics at the University of Washington. Welcome to the show!
Margaret Mitchell: Thank you. Thank you for having me.
I’m always intrigued by how people make their way to the AI world, because a lot of times what they study in University [is so varied]. I’ve seen neuroscientists, I see physicists, I see all kinds of backgrounds. [It’s] like all roads lead to Rome. What was the path that got you from linguistics to computational linguistics and to artificial intelligence?
So I followed a path similar to, I think, some other people who’ve had sort of linguistics training and then go into natural language processing, which is sort of [the] applied field of AI focusing specifically on processing and understanding text, as well as generating it. And so I had been kind of fascinated by noun phrases when I was an undergrad. So that’s things that refer to persons, places, objects in the world and things like that.
I wanted to figure out: is there a way that I could like analyze things in the world and then generate a noun phrase? So I was kind of playing around with just this idea of ‘How could I generate noun phrases that are humanlike?’ And that was before I knew about natural language processing, that was before this new wave of AI interest. I was just kind of playing around with trying to do something that was humanlike, from my understanding of how language worked. Then I found myself having to code and stuff to get that to work—like mock up some basic examples of how that could work if you had a different knowledge about the kind of things that you’re trying to talk about.
And once I started doing that, I realized that I was doing essentially what’s called natural language generation. So generating phrases and things like that based on some input data or input knowledge base, something like that. And so once I started getting into the natural language generation world, it was a slippery slope to get into machine learning and then what we’re now calling artificial intelligence because those kinds of things end up being the methods that you use in order to process language.
So my question is: I always hear these things that say “computers have a x-ty-9% point whatever accuracy in transcription” and I fly a lot. My frequent flyer number of choice has an A, an H and an 8 in it.
Oh no.
And I would say it never gets it right.
Right.
And it’s only got 36 choices.
Right.
Why is it so awful?
Right. So that’s speech processing. And that has to do with a bunch of different things, including just how well the speech stream is being analyzed, and the sort of frequencies that are picked up are going to be different depending on what kind of device you’re using. And a lot of times the higher frequencies are cut off. And so words or sounds that we hear really easily face to face are muddled more when we’re using different kinds of devices. And so that ends up, especially on things like telephones, cutting off a lot of these higher frequencies that really help with those distinctions. And then there are just general training issues, so depending on who you’ve trained on and what the data represents, you’re going to have different kinds of strengths and weaknesses.
Well I also find that in a way, our ability to process linguistics is ahead of our ability in many cases to do something with it. I can’t say the names out loud because I have two of these popular devices on my desk and they’ll answer me if I mentioned them, but they always understand what I’m saying. But the degree to which they get it right, like if I say “what’s bigger—a nickel or the sun?” They never get it. And yet they usually understand the sentence.
So I don’t really know where I’m going with that other than, do you feel like you could say your area of practice is one of the more mature, like hey, we’re doing our bit, the rest of you common sense people over there and you models of the world over there and you transfer learning people, y’all are falling behind, but the computational linguistics people—we have it all together?
I don’t think that’s true. And the things you’re mentioning aren’t actually mutually exclusive either, so in natural language processing you often use common sense databases or you’re actually helping to do information extraction in order to fill out those databases. And you can also use transfer learning as a general technique that is pretty powerful in deep learning models right now.
Deep learning models are used in natural language processing as well as image processing as well as a ton of other stuff.
So… everything you’re mentioning is relevant to this task of saying something and having your device on your desktop understand what you’re talking about. And that whole process isn’t just simply recognizing the words, but it’s taking those words and then mapping them to some sort of user intent and then being able to act on that intent. That whole pipeline, that whole process involves a ton of different models and requires being able to make queries about the world and extract information based on… usually it’s going to be the content words of the phrase: so nouns, verbs, things that are conveying the main sort of ideas in your utterance, and using those in order to find information relevant to that.
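As a small illustration of that "content words" step, here is a hedged sketch using the spaCy library; spaCy and its small English model are assumptions, not what any particular assistant actually uses.

```python
import spacy

# Hedged sketch of pulling out the content words (nouns, verbs, adjectives)
# that a system might use to query a knowledge source. spaCy and the small
# English model are assumptions, not any particular product's stack.
nlp = spacy.load("en_core_web_sm")

doc = nlp("What's bigger, a nickel or the sun?")
content_words = [t.lemma_ for t in doc if t.pos_ in {"NOUN", "PROPN", "VERB", "ADJ"}]
print(content_words)  # e.g. ['big', 'nickel', 'sun'], the terms to look up
```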
So the Turing test… if I can’t tell if I’m talking to a person or a machine, you got to say the machine is doing a pretty good job. It’s thinking according to Turing. Do you think passing the Turing test would actually be a watershed event? Or do you think that’s more like marketing and hype, and it’s not the kind of thing you even care about one way or the other?
Right. So the Turing Test, as it was originally construed, has this basic notion that the person who is judging can’t tell whether or not it’s human-generated or machine-generated. And there are lots of ways to do that. That’s not exactly what we mean by human-level performance. So, for example, you could trivially pass the Turing test if your machine were pretending to be a person that doesn’t understand English well, right? So you could say, “Oh, this is a person behind this, they’re just learning English for the first time—they might get some things mixed up.”
Listen to this one-hour episode or read the full transcript at www.VoicesinAI.com
Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.
Voices in AI – Episode 82: A Conversation with Max Welling
Today's leading minds talk AI with host Byron Reese
About this Episode
Episode 82 of Voices in AI features host Byron Reese and Max Welling discussing the nature of intelligence and its relationship with intuition, evolution, and need.
Listen to this one-hour episode or read the full transcript at www.VoicesinAI.com
Transcript Excerpt
Byron Reese: This is Voices in AI brought to you by GigaOm, and I’m Byron Reese. Today my guest is Max Welling. He is the Vice President, Technologies at Qualcomm. He holds a Ph.D. in theoretical physics from Utrecht University and he’s done postdoc work at Caltech, University of Toronto and other places as well. Welcome to the show Max!
Max Welling: Thank you very much.
I always like to start with the question [on] first principles, which is: What is intelligence and why is artificial intelligence artificial? Is it not really intelligent? Or is it? I’ll start with that. What is intelligence and why is AI artificial?
Okay. So intelligence is not something that’s easily defined in a single sentence. I think there is a whole broad spectrum of possible intelligences, and in fact in artificial systems we are starting to see very different kinds of intelligence. For instance, you can think of a search engine as being intelligent in some way, but it’s obviously a very different kind of intelligence than a human being’s, right?
So there’s human intelligence and I guess that’s the ability to plan ahead and to analyze the world, to organize information—these kinds of things. But artificial intelligence is artificial because it’s sort of in machines not in human brains. That’s the only reason why we call it ‘artificial.’ I don’t think there is any reason why artificial intelligence couldn’t be the same or very similar to human intelligence. I just think that that’s a very restricted set of intelligence. And we could imagine having a whole broad spectrum of intelligence in machines.
I’m with you [on] all of that, but maybe because human intelligence is organizing information and planning ahead, and machines are doing something different, like search engines and all that, maybe I should ask the question: What isn’t intelligence? At some point, doesn’t it lose all its meaning if it covers kind of… a lot of stuff? What are we really talking about when we come to intelligence? Are we talking about problem solving? Are we talking about adaptation, or what? Or is it so broad that it has no definition?
Well yeah, it depends on how broad you want to define it. I think it’s not a very well defined term per se. I mean you could ask yourself whether a fish is intelligent. And I think a fish to some degree is intelligent because you know it has a brain, it processes information, it adapts perhaps a little bit to the environment. So even a fish is intelligent, but clearly it’s a lot less intelligent than a human.
So, I would say, anything that has the purpose of sensing—sort of acquiring information from its environment—and computing on that information to its own benefit. In other words, surviving better is the ultimate goal, or reproducing, maybe, is the real ultimate goal. And so basically, once you’ve taken in information and you compute, then you can act: you can use that information to act on the world in order to bring the world into a state that’s more beneficial for you, right? So that you can survive better, reproduce better. So intelligence, I would say, is anything that processes information in order to achieve a particular goal, which in evolution is reproducing or surviving.
But… in artificial systems it could be something very different. In an artificial system, you could still sense information, you could still compute and process information in order to satisfy your customers—which is like providing them with better search results or something like that. So that’s a different goal, but the same phenomenon is underlying it, which is processing information to reach that goal.
Now, and you mentioned adaptation and learning, so I think those are things that are super important parts of being intelligent. So a system that can adapt and learn from its environment and from experiences is a system that can keep improving itself and therefore become more intelligent or better at its task, or adapt when the environment is changing.
So these are really important parts of being intelligent, but not necessary because you could imagine a self-driving car as being completely pre-programmed. It doesn’t adapt, but it still behaves intelligently in the sense that it knows when things are happening, it knows when to overtake other cars, it knows how to avoid collisions, etcetera.
So in short, I think intelligence is actually a very broad spectrum of things. It’s not super well-defined, and of course you can define more narrow things like a human intelligence for instance, or fish intelligence and/or search engine intelligence or something like that, and then it would mean something slightly different.
How far down in simplicity would you extend that? So if you have a pet cat and you have a food bowl that refills itself when it gets empty…it’s got a weight sensor, and when the weight sensor shows nothing in there, it opens something up and then fills it. It has a goal which is: keep the cat happy. Is that a primitive kind of artificial intelligence?
It would be a very, very primitive kind of artificial intelligence. Yes.
Fair enough. And then going back centuries before that, I read that the first vending machines, the first coin-operated machines, were used to dispense holy water: you would drop a coin in a slot, the weight of the coin would weigh down a lever that would open a valve and dispense some water, and as the water was dispensed the coin would fall out and the valve would close again. Is that a really, really primitive artificial intelligence?

Yeah. I don’t know. I mean, you can drive these things to an extreme with many of these definitions. Clearly this is some kind of mechanism. I guess there is a bit of sensing, because it’s sensing the weight of a coin, and then it has a response to that—which is opening something. It’s a completely automatic response, and humans actually have many of these reflexes. If you hit your knee with the little hammer like the doctor does, your knee jerks up, and that’s actually being done through a nervous system that doesn’t even reach your brain; I think it’s handled somewhere in the back of your spine. So it’s very, very, very primitive, but still you could argue it senses something, it computes something, and it acts. So it’s the very most fundamental, simple form of intelligence. Yeah.
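As a minimal sketch of that most basic sense-compute-act loop (the sensor and actuator functions are simulated stand-ins, not real hardware drivers), the cat-feeder example reduces to a single fixed rule:

```python
import random
import time

def read_weight_grams() -> float:
    # Stand-in for polling a real load cell under the bowl
    return random.uniform(0.0, 40.0)

def open_dispenser_for(seconds: float) -> None:
    # Stand-in for driving the hopper valve
    print(f"dispensing food for {seconds} s")

for _ in range(5):                      # bounded loop, just for the sketch
    if read_weight_grams() < 5.0:       # sense: the bowl is (nearly) empty
        open_dispenser_for(2.0)         # act: refill it
    time.sleep(1)                       # no learning, no adaptation: one hard-wired rule
```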
So the technique we’re using to make a lot of advances in artificial intelligence now, in computers, is machine learning, and I guess it’s really a simple idea: let’s study data about the past, look for patterns, and make projections into the future. How powerful is that technique… what do you think are the inherent limits of that particular way of gaining knowledge and building intelligence?
Well, I think it’s kind of interesting if you look at the history of AI. In the old days, a lot of AI was hard-coding rules. You would think about all the eventualities you could encounter, and for each one of those you would program an automatic response. Those systems did not necessarily look at large amounts of data from which they would learn patterns and learn to respond.
In other words, it was all up to humans to figure out what the relevant things to look at and sense are, and how to respond to them. If you make enough of those, a system like that actually looks like it’s behaving quite intelligently. And still, I think, nowadays a large component of self-driving cars is made of lots and lots of these rules hard-coded into the system. So if you put many, many of these really primitive pieces of intelligence together, they might look like they act quite intelligently.
Now there is a new paradigm. It has always been there, but it has basically become the dominant, mainstream approach in AI. The new paradigm, I would say, is: “Well, why are we actually trying to hand-code all of these things we should sense, when basically you can only do this to the level of what the human imagination is able to come up with, right?”
So think about detecting… let’s say, whether somebody is suffering from Alzheimer’s, from a brain MRI. Well, you can look at the size of the hippocampus, and it’s known that that organ shrinks if you are starting to suffer from the memory issues that are correlated with Alzheimer’s. A human can think about that and put it in as a rule, but it turns out there are many, many far more subtle patterns in that MRI scan, and if you sum all of those up, you can actually get a much better prediction.
But humans wouldn’t even be able to see those subtle patterns, because it’s something like: if this brain region and this brain region and this brain region, but not that brain region, have this particular pattern, then that’s a little bit of evidence in favor of Alzheimer’s, and there are hundreds and hundreds of those things. So humans lack the imagination, or the capacity, to come up with all of these rules. And we basically discovered that you should just provide a large data set and let the machine itself figure out what these rules are, instead of trying to hand-code them in. This is the big change with deep learning, for instance, in computer vision and speech recognition.
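A minimal sketch of that point on synthetic data: a simple learned model that weighs dozens of weak regional signals typically beats any single hand-picked rule such as a hippocampus-volume threshold. The feature counts and effect sizes below are made up for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_scans, n_regions = 400, 60
X = rng.normal(size=(n_scans, n_regions))        # synthetic per-region measurements
true_w = rng.normal(scale=0.4, size=n_regions)   # many small effects, no single strong one
y = ((X @ true_w + rng.normal(scale=0.5, size=n_scans)) > 0).astype(int)

# Hand-coded rule: threshold a single region (index 0 standing in for "hippocampus")
rule_acc = ((X[:, 0] < 0) == y).mean()

# Learned model: combines all the weak signals at once (in-sample, just to illustrate)
model = LogisticRegression(max_iter=1000).fit(X, y)
learned_acc = model.score(X, y)
print(f"single-rule accuracy ~{rule_acc:.2f}, learned-model accuracy ~{learned_acc:.2f}")
```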
Let’s first take computer vision. People had many hand-coded features that they would try to identify in the image, and then from there they would make predictions, for instance whether there was a person in the image or something like that. But then we basically said, “Well, let’s just throw all the raw pixels at a neural net (this is a convolutional neural net) and let the neural net figure out what the right features are. Let this neural net learn which features to attend to when it needs to do a certain task.” And it works a lot better, again because there are many very subtle patterns that it now learns to look at which humans simply didn’t think to look at.
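As a bare-bones sketch of that idea (in PyTorch, which is an assumption on my part since the conversation names no framework), the network takes raw pixels and the convolutional filters that act as features are learned rather than hand-coded:

```python
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        # These filters start random and are learned from data during training,
        # replacing the hand-engineered features of the older pipeline.
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

# Four raw 64x64 RGB images go straight in; no separate feature-extraction step.
logits = TinyCNN()(torch.randn(4, 3, 64, 64))
print(logits.shape)  # torch.Size([4, 2])
```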
Now another example is AlphaGo, maybe. In AlphaGo something similar happened. Humans have analyzed this game and come up with all sorts of rules of thumb for how to play it. But then AlphaGo figured out things that humans can’t comprehend, because they’re too complex, and those things still made the algorithm win the game.
So I would say it’s a new paradigm that goes well beyond trying to hand-code human-invented features into a system, and therefore it’s a lot more powerful. And in fact this is also, of course, the way humans work. And I don’t see a real limit to this, right? If you pump more data through it, in principle you can learn a lot of things, or basically everything you need to learn in order to become intelligent.
Listen to this one-hour episode or read the full transcript at www.VoicesinAI.com
Voices in AI
Visit VoicesInAI.com to access the podcast, or subscribe now:
iTunes
Play
Stitcher
RSS
Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.
Text
DEMOCRATIZING DATA MANAGEMENT
Lately, I’ve written a lot about data management for unstructured data and, more generally, about the relationship between data management and secondary storage. While recently attending Storage Field Day 18, I received confirmation of what is coming in the near future for data management technologies: simplification and democratization of data management will be key for end-user adoption and success.
DATA MANAGEMENT, A SORT OF BUZZ WORD
Unfortunately, the term ‘data management’ is becoming a buzzword among vendors, especially data protection vendors. Some backup vendors are replacing Data Protection with the term Data Management when describing their services in marketing material. But although it’s becoming their main message, when you ask them to elaborate on the data management aspects, they struggle to articulate how their services align with the terminology.
Yes, there are exceptions. Some vendors have very clear roadmaps. But, as often happens in this industry, it seems many vendors are counting their chickens before they hatch.
WHAT DOES “DEMOCRATIZING DATA MANAGEMENT” MEAN?
Even with most vendors still refining their services and messaging, I saw a few exceptions at SFD18: providers with very clear roadmaps.
One exception is Cohesity. I have long been confident they’re heading in the right direction with their strategy and product, and I have noted the promise of the Cohesity Analytics Workbench: a tool with great, but so far only theoretical, potential. In a new step forward announced at SFD18, Cohesity’s system can now run full-fledged applications, and the Analytics Workbench will soon become a thing of the past.
As I said, the Analytics Workbench was a great idea but the name of this tool tells the real story. Workbench means a lot of work and this is why, even if it’s exciting, it can’t be broadly adopted. It is powerful, based on Hadoop, but you need to know how to use it and how to write applications. I’m sure it has been used, but the reality is that for the traditional enterprise this is only cool on paper. And finding somebody that can write and maintain a big data application is not at all easy! Especially if there is no direct business return from it.
Standard apps that run on Cohesity’s platform are a totally different thing. Easy to use, deployed and managed transparently by the platform itself and, above all, ready to prove their value in a matter of minutes. The app catalog, or marketplace, doesn’t have many solutions yet, but some of them come from Cohesity partners like Splunk or Imanis Data for example.
Without going into the technical details (videos of the sessions are also available on YouTube), let’s just say that Cohesity demonstrated how quickly an app can be deployed and used, all without being a data scientist or a developer. You just take a snapshot, run the app against a copy of your data, get the result, and act accordingly. And it can be automated! Think about virus scanning, ransomware protection, log analytics, or advanced DB management (and I’m not using my imagination here, because these are already available!).
Now the challenge for Cohesity is to involve more partners and to build a solid ecosystem. I recommended they release all the components to the open source community and try to standardize these apps for all vendors, making the catalog really big… but I know this is pure wishful thinking at the moment.
The idea of giving the average user this kind of power is amazing, and NetApp is on the same wavelength, showing a pretty exciting potential roadmap. At SFD18 they presented an interesting solution which allows standard replication tools (SnapMirror, for those familiar with NetApp’s portfolio) to make copies of data in the cloud. The product is still very immature, but the potential is huge. In fact, alongside the standard use cases for remote replication (without needing a second NetApp appliance in this case), there are other possibilities ready to be exploited and leveraged to augment the value of data stored in these systems (and in the cloud).
In short, if you don’t want to watch the session video: they provide a management tool (a GUI, to simplify a bit) that runs in your Amazon account and can use SnapMirror to copy data directly to AWS, converting it into objects stored in an S3 bucket. Files and metadata are accessible and searchable, but this is only the first step. During the session they demonstrated a custom application that can access a copy of that data and operate on it, and any enabled user on the platform could do the same. More or less, we are talking about a standard S3 bucket, available on Amazon, that you can use as a data set for any application. Unfortunately, as was the case for Cohesity with its Analytics Workbench, only pre-packaged, easy-to-use applications will unleash the full potential of this solution when it comes to day-to-day data management.
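As a small sketch of what “accessible and searchable” means once the copy lands in a standard S3 bucket (the bucket name and prefix are made up; boto3 is the standard AWS SDK for Python):

```python
import boto3

s3 = boto3.client("s3")
bucket = "snapmirror-replica-example"      # hypothetical bucket holding the replicated copy

# Walk the replicated objects and pick out, say, the CSV files for an analytics job
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=bucket, Prefix="projects/"):
    for obj in page.get("Contents", []):
        if obj["Key"].endswith(".csv"):
            print(obj["Key"], obj["Size"])
```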
WHAT IS THE BENEFIT?
Making data easily accessible and re-usable by a large number of individuals in your organization is the real benefit here. They could run different applications, each one of them for different reasons, and get insights needed to improve productivity, security, privacy and so on.
At the end of the day, we’re talking about a sort of revolution here. We are not there yet, of course, but we are finally seeing how data can be effectively reused without having a Ph.D. in computer science or being proficient in MapReduce, Java, or any other programming language!
Yes, as I said, we’re not there yet, but we’re not very far from achieving this goal either, and I’m sure that Cohesity’s and NetApp’s initiatives will soon be followed by others.
CLOSING THE CIRCLE
Cohesity and NetApp are executing their respective strategies superbly. On one side, you have NetApp becoming more and more cloud-ish and more data- than storage-centric while, on the other, you have Cohesity pushing aggressively on its secondary storage vision with products and solutions that are absolutely spot on.
In the coming weeks, I’ll be spending some time analyzing what went on during their sessions, as well as other moments I spent with them recently. I’ll be sharing my thoughts with you… so stay tuned!
Originally posted on Juku.it
Text
Voices in AI – Episode 81: A Conversation with Siraj Raval
Today's leading minds talk AI with host Byron Reese
About this Episode
Episode 81 of Voices in AI features host Byron Reese and Siraj Raval discussing how teaching AI to the world can help improve the quality of life for everyone, and what the pitfalls along the way are.
Listen to this one-hour episode or read the full transcript at www.VoicesinAI.com
Transcript Excerpt
Byron Reese: This is Voices in AI, brought to you by GigaOm, and I’m Byron Reese. Today my guest is Siraj Raval. He is the director of the School of AI. He holds a degree in computer science from Columbia University. Welcome to the show, Siraj.
Siraj Raval: Thank you so much for having me, Byron.
I always like to start off with just definitions. What is artificial intelligence and specifically what’s artificial about it?
That’s a great question. So, AI, Artificial Intelligence is actually… I like to think of it as a giant circle. I’m a very visual person, so just imagine a giant circle and we’ll label that circle AI, okay? Inside of that circle there are smaller circles, and these would be the subfields of AI. One of them would be heuristics: statistical techniques to try to play games a little better.
When Garry Kasparov was defeated by Deep Blue — that was using heuristics. There’s another bubble inside of this bigger AI bubble called machine learning, and that’s really the hottest area of AI right now; it’s all about learning from data. So there’s heuristics, there’s learning from data — which is machine learning — and there is deep learning as well, which is a smaller bubble inside of machine learning. So AI is a very broad term. And people in computer science are always arguing about what is AI, what isn’t AI? But for me, I like to keep it simple. I think of AI as any kind of machine that mimics human intelligence in some way.
Well hold on a minute though, you can’t say artificial intelligence is a machine that mimics human intelligence because you’re just defining the word with what we’re trying to get at. So what’s intelligence?
That’s a great question. Intelligence is the ability to learn and apply knowledge. And we have a lot of it. Well, some of us anyway (just kidding)
That’s interesting because of AlphaGo — the emphasis on it being able to learn is a pretty high bar. Something like my cat food dish that refills itself when the cat eats all the food, that isn’t intelligent in your book, right? It’s not learning anything new. Is that true?
Yeah. So it’s not learning. So there has to be some kind of feedback, some kind of response to stimulus, so whether that’s from data or whether that’s a statistical technique based on the number of wins versus losses, did this work, did this not work? It’s got to have this feedback loop of something outside of it being external to it is affecting it. In the way that we perceive the world, something external to our heads and that affects how we act in the world.
So [take] the smartest program in the world. Once it’s instantiated as a single program, it is no longer intelligent. Is that true? Because it has stopped learning at that point. It can be as sophisticated as can be, but in your mind, if it’s not learning something new, it’s not intelligent.
That’s a good question. Well, I mean, the point at which it would not need to learn or there would be nothing for it to learn would be the point in which, to get ‘out there,’ it saturates the entire universe.
Well, no. I mean, let’s take AlphaGo. Let’s say they decide to put out an iPhone version of Go and just take the latest and greatest version of this, and make a great program that plays Go. At that point it is no longer AI, if we rigidly follow your definition, because it has stopped learning; it’s now frozen in capability. I can play it a thousand times, and in game 1,001 it’s not doing any better.
Sure. Okay, but to stick to my rigid definition, I’ve said that intelligence is the ability to learn and apply knowledge.
Right.
And it will still be doing the latter part of that: applying knowledge.
Do you think that it’s artificial in that it isn’t really intelligence, it just looks like it? Is what a computer does actually intelligent or is it mimicking intelligence? Or is there a difference between those two things?
There are different kinds of intelligences in the world. I mean, think of it like a symphony of intelligences like, our intelligence is really good at doing a huge range of tasks, but a dog has a certain type of intelligence that keeps it more aware of things than we would be, right? Dogs have superhuman hearing capability. So in that way a dog is more intelligent than us for that specific task. So when we say ‘artificial intelligence,’ you know, talking about the AlphaGo example, that algorithm is better than any human on the planet for that specific task. It’s a different kind of intelligence. ‘Foreign,’ ‘alien,’ ‘artificial’ — you know, all of those words would kind of describe its capability.
You’re the Director of School of AI. What is that? Tell me the mission and what you’re doing.
Sure. So I’ve been making educational videos about AI on YouTube for the past couple of years and I had the idea about nine months ago, to have this call to action for people who watch my videos. And I had this idea of saying, ‘Let’s start an initiative where I’m not the only one teaching but there are other people, and we’ll call ourselves The School of AI and we have one mission which is to teach people how to use AI technology for the betterment of humanity for free.’
And so we’re a non-profit initiative. And since then, we have what are called ‘deans’: 800 of them spread out across the world, across 400 cities globally. They’re teaching people in their local communities, from Harare, Zimbabwe to Zurich to parts of South America. It’s a global community. They’re building their local schools, Schools of AI, you know, School of AI Barcelona, what have you, and it’s been an amazing, amazing couple of months. It feels like every day I wake up, I look in our Slack channel, I see a picture of a bunch of students in, say, Mexico City, with our school there and our logo there, and it’s like, “Is this real?” But it is real. Yeah, it’s been a lot of fun so far.
Put some flesh on those bones. What does it mean to learn… what are people learning to do?
Right. So the guidelines that we’re following — we’re talking about the betterment of humanity — are the 17 Sustainable Development Goals (SDGs) outlined by the United Nations. Among them are no poverty, no extreme poverty, sustainable climate action, things like that. Basically trying to fulfill the basic needs of humans in both developed and developing countries, so that eventually we can all reach that stage of self-actualization and be able to contribute and create and discover, which is what I think we humans are best at. Not doing trivial, laborious, repetitive tasks; that’s what machines are good for. So if we can teach our students — we call them ‘wizards’ — how to use the technology to automate all of that away, then we can get to a world where all of us are contributing to the betterment and the progress of our species, whether it’s in science or art, etcetera.
But specifically, what are people learning to do like on a day to day basis?
One example would be classifying images. That’s a very generic example, but we can use it to, say, help farmers in parts of South Africa detect plants that are diseased or not diseased. Another example would be anomaly detection, which is kind of finding the needle in the haystack: what here doesn’t fit in with the rest? That can be applied to fraud detection, right? If you’ve got thousands and thousands of transactions and one of them is fraud, an AI can learn what fraud is better than any human could, because it’s just so much data. Those are just two; I can give you some more. There’s quite a lot, but I think that…
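As a minimal sketch of the anomaly-detection example (scikit-learn on made-up transaction data, not School of AI course material), the model flags the record that doesn’t fit without anyone writing a rule for what fraud looks like:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
# Thousands of ordinary transactions: (amount in dollars, hour of day)
normal = np.column_stack([rng.normal(60, 20, 10_000), rng.normal(14, 3, 10_000)])
fraud = np.array([[2_400.0, 3.0]])          # one very unusual transaction
X = np.vstack([normal, fraud])

detector = IsolationForest(contamination=0.001, random_state=0).fit(X)
scores = detector.decision_function(X)      # lower score = more anomalous
print("most anomalous transaction:", X[np.argmin(scores)])
```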
No, but I mean, what’s the clue… so it’s the idea that there just aren’t enough people that have the basic skills to “do AI” and you’re trying to fill that gap?
That is what it is. And yeah, in that the concepts behind this technology, the mathematical concepts I don’t believe are accessible yet to a wide enough audience. So we at School of AI are trying to broaden that audience and trying to make it accessible not just to developers but eventually to everybody. You know, moms, dads, grandmas, grandpas, people who — just they’re not like the most technical people — we’re trying to reach them and make this something that everybody does, because we sincerely believe that this is going to be a part of our lives and eventually everybody is going to be implementing AI in some way or another.
It doesn’t necessarily have to be code. It can be through some application or some kind of ‘drag and drop’ interface, but it’s definitely in the future of work. So yes, that’s what it is. And also it’s the fact that we are facing so many huge problems, daunting problems as a species — existential threats. And we think we might not be good enough alone to solve these problems. Climate change, for example: a lot of people think that it’s too late to solve climate change, but we think that we have a huge amount of data available and we think that the answers to some of the hardest problems related to CO2 emission and how we can allocate resources for that goal lie hidden in that data, and using AI we can find them.
Listen to this one-hour episode or read the full transcript at www.VoicesinAI.com
Voices in AI
Visit VoicesInAI.com to access the podcast, or subscribe now:
iTunes
Play
Stitcher
RSS
Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.
Text
The Good, The Bad, and The Ugly of Hyper-Converged Infrastructure (HCI)
HCI is very popular with organizations of all sizes now, but it’s not perfect! A compromise must be found between performance, flexibility, usability, and cost. This applies to most types of infrastructure, hyper-converged infrastructure (HCI) included. What’s more, it is quite hard to find a single storage infrastructure that covers all use cases that need to be addressed.
HCI is a good solution for small organizations, but the architecture imposes some trade-offs, limiting its potential in larger enterprises where there is a sizeable and established environment to support. Put this way, HCI is a perfect example of the 80/20 rule, whereby 80% of existing workloads are predictably dynamic and a good fit for HCI. The problem is that the remaining 20% of your infrastructure is expected to grow exponentially with Artificial Intelligence/Machine Learning (AI/ML), Internet of Things (IoT), edge computing projects and more, all of which organizations are evaluating now and which will impact business competitiveness in the coming years.
THE GOOD OF HCI
User-friendliness is what most organizations like about HCI. There is no need to change infrastructure operations: the team keeps using the same hypervisor it is accustomed to, with fewer complications around storage management. The infrastructure is simplified thanks to the modular scale-out approach, in which each new node adds more CPU, RAM, and storage. As a result, HCI delivers good TCO figures as well.
THE BAD OF HCI
The limitations of HCI arise from exactly what makes it a good solution for ordinary workloads and virtualization. In fact, it’s not really designed to cover all types of workloads (think about big data, for example). In addition, not all applications scale the same way, meaning that sometimes different types of resources are needed in your infrastructure (e.g. storage-only nodes for capacity). Last but not least, most HCI products on the market focus on edge or core use cases, but are not able to cover both concurrently or efficiently at a reasonable cost. These limitations might be of secondary importance today, but in the long term that could change radically with new, unforeseen performance, capacity, and technology requirements.
THE UGLY OF HCI
The initial investment to adopt HCI is often pretty high, especially if storage and server amortization cycles are different. Since HCI means purchasing servers, storage, and networking together, some organizations are forced to start HCI adoption with individual workloads, or to turn to financing options to purchase the entire infrastructure at once. This is merely a financial issue, but I’ve seen it happen several times with customers of all sizes trying to manage their budget wisely. It can delay and slow down HCI adoption and result in a long transition period that benefits no one.
CLOSING THE CIRCLE
The perfect infrastructure that excels in every aspect does not yet exist. HCI is a good compromise for a lot of workloads but fails with the most demanding ones – and what are usually considered strengths can quickly become weaknesses.
Recently, I had the chance to be briefed by DataCore on their HCI solution. I really like their approach, and I think it could address some of the issues I discuss in this blog. I’ll be hosting a webinar with them in a couple of weeks, where we will be talking about how to deploy hybrid-converged infrastructures. Yes, you read that right: not hyper- but hybrid-converged. If you want to learn more about it, sign up and join us. I’m interested in your opinions and will make the webinar as interactive as possible, with quick polls and questions you’ll be invited to ask the presenters.
Originally posted on Juku.it
Text
Five questions for… Keri Gilder, Chief Commercial Officer, Colt Technology Services. Can Connectivity be linked to Customer Experience?
Customer experience, or CX, is one of those areas that makes you wonder why it’s being discussed: after all, which organisation would go out of its way to say that customers were not a priority? Nonetheless, talking about customers can be very different to actually improving how they interact with the business, not least because the link between theory and technical practicality will not always be evident.
In the case of connectivity, the task is even harder. In principle there should be a connection – if you (as a customer) can’t connect to the service you need, or if it is slow or unresponsive, your experience will suffer. In practice, however, connectivity is often seen as low-level infrastructure, with little value to add beyond linking things up.
These challenges made our research on the link between connectivity and CX, conducted in partnership with Colt, all the more fascinating. The top-line finding was that organizations did see a link, and furthermore, were actively looking for ways to improve CX via connectivity. Following the research, I sat down with Keri Gilder, Chief Commercial Officer, Colt Technology Services, to find out what she thought of the findings, and what the provider was doing in response.
From the perspective of a connectivity provider, how are you framing the increasing attention on customer experience? How is it impacting both your wholesale and enterprise customers, and what do you think is behind this?
Customers in all sectors are demanding much more from their providers – the consumerisation of IT isn’t a new trend but it’s still highly relevant. People look at the flexibility and service they get from consumer facing companies and are asking why that doesn’t apply to their B2B suppliers. Many telco companies have been slow to adapt to these demands, so the result is that connectivity can be treated as a commodity rather than a differentiator.
Our customers are dealing with massive change, from the growth in cloud applications and the changing structure of the workplace, to security challenges and the constant state of digital transformation. This means the network becomes even more critical for those with a focus on delivering the best experience to customers.
When customers are dealing with these challenges it’s not good enough to sit back and wait for them to tell us what they need – we need to work together to help shape requirements, acting as advisors instead of just a supplier.
In the report, we saw a number of challenges getting in the way of improving CX delivery, not least how difficult it is to draw a clear picture of what customer experience actually means. How is this manifesting itself in the organisations you speak to?
A ‘good’ customer experience can mean different things to different people and sectors, so it’s not a surprise to see people struggling to identify the best course of action. To some degree it’s the obvious things that people expect – delivering quickly and on time, while ensuring they have access to the information they need.
But for suppliers it’s also about putting yourself in the customer’s shoes; what challenges are they facing and what are their customers demanding of them? From there it’s easier to see how to make a difference to their business and, in turn, how you can improve their experience of working with you.
A fascinating and repeated finding was that enterprises want connectivity that ‘just works’ from the outset, whether or not it has more advanced capabilities such as flexibility over time. How does this map onto what your customers are asking for?
Our customers have always expected connectivity that just works – the challenge we’re seeing now is that it’s much harder to predict network demand for the coming years or even months. CIOs are having to manage capacity requirements for applications or activities that might not even be on their radar and that’s driving a need for flexibility. This shows how connectivity can directly impact customer experience goals – if the network can’t manage these new services or if it doesn’t have the ability to quickly add new locations or services then it’ll be seen as a barrier, rather than a platform for innovation.
Also interesting was the low level of importance assigned to Net Promoter Scores (NPS). Is it that such metrics have had their time, or how else would you explain this? [Probably that NPS is an aggregated view of the consequences of other metrics]
We closely track our NPS – it’s an excellent way for us to measure ourselves as it covers so many aspects of what we provide to customers. But we know it isn’t and shouldn’t be the only measure of good customer experience. There are also the other factors identified in the research, like delivering on time and how you respond if something goes wrong.
If you don’t deliver on promises, meet expectations or go above and beyond to keep the customer happy then you won’t score highly. I don’t think it was a surprise to see that people don’t use NPS as a way to measure their suppliers, but if suppliers are getting everything else right, then their NPS score will naturally improve.
Respondents told us that the most important way to improve the link between connectivity and CX, was to get their own houses in order, improving skills sets and operational processes. How is Colt as an organisation helping its customers achieve this goal?
We’ve always been focussed on customer experience, and our vision is to be known as the most customer-oriented business in the industry. This means that we need to do much more than providing connectivity to our customers. Whether that’s Enterprise, Capital Markets or Wholesale, it’s about working in partnership with our customers to find out what their goals are and then collaborating to show how we can help achieve them.
A crucial part of achieving this comes from listening to our customers and taking the time to understand the challenges they’re facing; one way in which we do this is through Innovation Workshops. These take place in the early stages of an engagement, bringing together multiple stakeholders with Colt experts to fully understand the broader business problems and how we can use technology to solve them. This means we’re providing more than just technology – we’re helping customers with their business objectives.
The other aspect is in leading from the front – everyone at Colt has a performance objective relating to customer experience. We also have several internal programs running which don’t just superficially look at customer experience but are seeing the business invest in new tools and create new processes to ensure we’re going above and beyond what people expect from a connectivity supplier.
Text
Cloud Storage Is Expensive? Are You Doing it Right?
In my day-to-day job, I talk to a lot of end users. And when it comes to the cloud, there are still many differences between Europe and the US. The European cloud market is much more fragmented than the American one for several reasons, including the slightly different regulations in each country. Cloud adoption is slower in Europe, and many organizations still like to keep data and infrastructure on their premises. The European approach is quite pragmatic, and many enterprises take some advantage of the experience gained by similar organizations on the other side of the pond. One similarity is cloud storage or, better, cloud storage costs and the reactions to them.
The fact that data is growing everywhere at an incredible pace is nothing new, and it is often growing faster than predicted in past years. At first glance, an all-in cloud strategy looks very compelling: low $/GB, less CAPEX and more OPEX, increased agility and more, until of course your cloud bill starts growing out of control.
As I wrote in one of my latest reports, “Alternatives to Amazon AWS S3”, the $/GB is only the first item on the bill; there are several others, including egress fees, that come after it. This is an aspect that is often overlooked at the beginning and has unpleasant consequences later.
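As a rough back-of-the-envelope illustration (the prices and volumes below are assumptions made up for the example, not any provider’s actual list prices), egress can overtake the headline storage cost once data is read back heavily:

```python
stored_tb = 500            # data kept in the cloud
read_back_tb = 200         # data pulled out of the cloud each month
storage_price_gb = 0.023   # assumed $/GB-month for a standard object tier
egress_price_gb = 0.09     # assumed $/GB transferred out

storage_cost = stored_tb * 1024 * storage_price_gb
egress_cost = read_back_tb * 1024 * egress_price_gb
print(f"storage: ${storage_cost:,.0f}/month, egress: ${egress_cost:,.0f}/month")
# With these assumptions, egress (~$18,400) already exceeds storage (~$11,800)
```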
There are at least two reasons why a cloud storage bill can get out of control:
The application is not written properly. In fact, someone wrote or migrated an application that is not specifically designed to work in the cloud and is not resource savvy. This happens often with legacy applications that are migrated as-is. Sometimes it’s hard to solve because re-engineering an old application is simply not possible. In other cases, the application behavior could be corrected with a better understanding of the API and the mechanisms that regulate the cloud (and how they are charged).
There is nothing wrong with the workload, it’s just that data is being created, read and moved around more than in the past.
OPTIMIZATION
Start by optimizing the cloud storage infrastructure. Many providers are adding additional storage tiers and automations to help with this. In some cases, it adds some complexity (someone must manage new policies and ensure they work properly). Not a big deal but probably not a huge saving either.
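On AWS S3, for example, those policies take the form of lifecycle rules; here is a minimal sketch with boto3 (the bucket name, prefix, and day thresholds are illustrative assumptions, not a recommendation):

```python
import boto3

s3 = boto3.client("s3")
s3.put_bucket_lifecycle_configuration(
    Bucket="example-archive-bucket",        # hypothetical bucket
    LifecycleConfiguration={
        "Rules": [{
            "ID": "tier-cold-data",
            "Status": "Enabled",
            "Filter": {"Prefix": "logs/"},  # only applies to this prefix
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},  # infrequent access after 30 days
                {"Days": 180, "StorageClass": "GLACIER"},     # archive after 180 days
            ],
        }]
    },
)
```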
Also, try to optimize the application. But that is not always easy, especially if you don’t have control of the code and the application wasn’t written with the intent to run in a cloud environment. Still, this could pay off in the mid to long term, but are you ready to invest in this direction?
BRING DATA BACK…
A common solution, adopted by a significant number of organizations now, is data repatriation: bringing data back on premises (or to a colocation service provider) and accessing it locally or from the cloud. Why not?
At the end of the day, the bigger the infrastructure, the lower the $/GB and, above all, there are no other fees to worry about. When thinking about petabytes, there are several optimizations you can take advantage of that lower the $/GB considerably: fat nodes with plenty of disks, multiple media tiers for performance and cold data, data footprint optimizations, and so on, all translating into low and predictable costs.
At the same time, if this is not enough, or you want to keep a balance between CAPEX and OPEX, go hybrid. Most storage systems on the market now allow you to tier data to S3-compatible storage systems, and I’m not talking only about object stores – NAS and block storage systems can do the same. I covered this topic extensively in this report, but check with your storage vendor of choice and I’m sure they’ll have solutions to help out with this.
…OR GO MULTI-CLOUD
Another option, which doesn’t negate what is written above, is to implement a multi-cloud storage strategy. Instead of focusing on a single cloud storage provider, abstract the access layer and pick what is best depending on the application, the workload, the cost, and so on, all determined by the needs of the moment. Multi-cloud data controllers are gaining momentum, with big vendors starting to make the first acquisitions (Red Hat with NooBaa, for example), and the number of solutions is growing at a steady pace. In practice, these products offer a standard front-end interface, usually S3-compatible, and can distribute data across several back-end repositories following user-defined policies. This leaves the end user with a lot of freedom of choice and flexibility regarding where to put (or migrate) data, while allowing transparent access to it regardless of where it’s stored. Last week, for example, I met with Leonovus, which has a compelling solution that combines what I just described with a strong set of security features.
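The core placement idea is simple enough to sketch (the endpoints, bucket name, and policies below are entirely hypothetical; real multi-cloud data controllers add replication, metadata search, and transparent retrieval on top of this):

```python
import boto3

# Two S3-compatible back ends behind one access layer (hypothetical endpoints)
BACKENDS = {
    "low-cost": boto3.client("s3", endpoint_url="https://s3.cheap-provider.example"),
    "low-latency": boto3.client("s3", endpoint_url="https://s3.fast-provider.example"),
}

# User-defined placement policies keyed on object prefix
POLICIES = {
    "backups/": "low-cost",       # cold copies go to the cheapest back end
    "analytics/": "low-latency",  # hot data stays close to the compute
}

def put_object(key: str, body: bytes, bucket: str = "unified-namespace") -> None:
    backend = next((b for prefix, b in POLICIES.items() if key.startswith(prefix)), "low-cost")
    BACKENDS[backend].put_object(Bucket=bucket, Key=key, Body=body)

put_object("backups/db-2019-03-01.dump", b"...")   # lands on the low-cost back end
```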
There are several alternatives to the major service providers when it comes to cloud storage: some of them focus on better pricing and lower or no egress fees, while others work on high performance too. As I wrote last week in another blog, going all-in with a single service provider could be an easy choice at the beginning but a huge risk in the long term.
CLOSING THE CIRCLE
Data storage is expensive and cloud storage is no exception. Those who think they will save money by just moving all of their data to the cloud as-is are making a big mistake. For example, cold data is a perfect fit for the cloud, thanks to its low $/GB, but as soon as you begin accessing it over and over again the costs can rise to an unsustainable level.
To avoid dealing with this problem later, it’s best to think about the right strategy now. Planning and executing the right hybrid or multi-cloud strategy can surely help to keep costs under control while providing the agility and flexibility needed to preserve IT infrastructure, and therefore business, competitiveness.
To learn more about multi-cloud data controllers, alternatives to AWS S3, and two-tier storage strategies, please check my reports on GigaOm. And subscribe to the Voices in Data Storage podcast to hear the latest news, market, and technology trends, with opinions, interviews, and other stories from the data and data storage field.
Originally posted on Juku.it
Text
GigaOm Infographic: Connectivity and Customer Experience
How can businesses use connectivity to drive improved CX for their customers? GigaOm asked 350+ strategic enterprise decision-makers from North America and Europe to share their experiences. Check out the infographic below and then read the full Research Byte here.
Text
Voices in AI – Episode 80: A Conversation with Charlie Burgoyne
Today's leading minds talk AI with host Byron Reese
About this Episode
Episode 80 of Voices in AI features host Byron Reese and Charlie Burgoyne discussing the difficulty of defining AI and how computer intelligence and human intelligence intersect and differ.
Listen to this one-hour episode or read the full transcript at www.VoicesinAI.com
Transcript Excerpt
Byron Reese: This is Voices in AI brought you by GigaOm and I’m Byron Reese. Today my guest is Charlie Burgoyne. He is the founder and CEO of Valkyrie Intelligence, a consulting firm with domain expertise in applied science and strategy. He’s also a general partner for Valkyrie Signals, an AI-driven hedge fund based in Austin, as well as the managing partner for Valkyrie labs, an AI credit company. Charlie holds a master’s degree in theoretical physics from Georgetown University and a bachelor’s in nuclear physics from George Washington University.
I had the occasion to meet Charlie when we shared a stage talking about AI, and about 30 seconds into my conversation with him I said we’ve got to get this guy on the show. So I think ‘strap in’, it should be a fun episode. Welcome to the show, Charlie.
Charlie Burgoyne: Thanks so much Byron for having me, excited to talk to you today.
Let’s start with [this]: maybe re-enact a little bit of our conversation when we first met. Tell me how you think of artificial intelligence, like what is it? What is artificial about it and what is intelligent about it?
Sure, so the further I get down in this field, the more I start thinking about AI with two different definitions. It’s a servant with two masters. It has its private-sector, applied, narrow-band applications, where AI is really all about understanding patterns that we perform and capitalize on every day and automating those — things like approving time cards and making selections within a retail environment. And that’s really where the real value of AI is right now in the market, and there are a lot of people in that space who are developing really cool algorithms that capitalize on the patterns that exist, and largely lie dormant, in data. In that definition, intelligence is really about the cycles that we use within a cognitive capability to instrument our life, and it’s artificial in that we don’t need an organic brain to do it.
Now the AI that I’m obsessed with from a research standpoint (a lot of academics are, and I know you are as well, Byron) — that AI definition is actually much more around the nature of intelligence itself, because in order to artificially create something, we must first understand it in its primitive state and in its unadulterated state. And I think that’s where the bulk of the really fascinating research in this domain is going: just understanding what intelligence is, in and of itself.
Now I’ll come kind of straight to the interesting part of this conversation, which is: I’ve had not quite a hundred guests on the show, and I can count on one hand the number who think it may not be possible to build a general intelligence. According to our conversation, you’re not convinced that we can do it. Is that true? And if so, why?
Yes… The short answer is I am not convinced we can create a generalized intelligence, and that’s become more and more solidified the deeper and deeper I go into research and familiarity with the field. If you really unpack intelligent decision making, it’s actually much more complicated than a simple collection of gates, a simple collection of empirically driven singular decisions, right? A lot of the neural network scientists would have us believe that all decisions are really the right permutation of weighted neurons interacting with other layers of weighted neurons.
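For reference, the “weighted neurons” picture he describes is, mechanically, nothing more than stacked weighted sums and nonlinearities; a tiny NumPy sketch with arbitrary sizes and random weights:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=8)                      # an input "stimulus"
W1, b1 = rng.normal(size=(16, 8)), np.zeros(16)
W2, b2 = rng.normal(size=(2, 16)), np.zeros(2)

h = np.maximum(0.0, W1 @ x + b1)            # layer 1: weighted sum + nonlinearity
decision = int(np.argmax(W2 @ h + b2))      # layer 2: the "decision" is more weighted sums
print(decision)
```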
From what I’ve been able to tell so far with our research, either that is not getting us towards the goal of creating a truly intelligent entity or it’s doing the best within the confines of the mechanics we have at our disposal now. In other words, I’m not sure whether or not the lack of progress towards a true generalized intelligence is due to the fact that (a) the digital environment that we have tried to create said artificial intelligence in is unamenable to that objective or (b) the nuances that are inherent to intelligence… I’m not positive yet those are things through which we have an understanding of modeling, nor would we ever be able to create a way of modeling that.
I’ll give you a quick example: If we think of any science fiction movie that encapsulates the nature of what AI will eventually be, whether it’s Her, or Ex Machina or Skynet or you name it. There are a couple of big leaps that get glossed over in all science fiction literature and film, and those leaps are really around things like motivation. What motivates an AI, like what truly at its core motivates AI like the one in Ex Machina to leave her creator and to enter into the world and explore? How is that intelligence derived from innate creativity? How are they designing things? How are they thinking about drawings and how are they identifying clothing that they need to put on? All these different nuances that are intelligently derived from that behavior. We really don’t have a good understanding of that, and we’re not really making progress towards an understanding of that, because we’ve been distracted for the last 20 years with research in fields of computer science that aren’t really that closely related to understanding those core drivers.
So when you say a sentence like ‘I don’t know if we’ll ever be able to make a general intelligence,’ ever is a long time. So do you mean that literally? Tell me a scenario in which it is literally impossible — like it can’t be done, even if you came across a genie that could grant your wish. It just can’t be done. Like maybe time travel, you know — back in time, it just may not be possible. Do you mean that ‘may not’ be possible? Or do you just mean on a time horizon that is meaningful to humans?
I think it’s on the spectrum between the two. But I think it leans closer towards ‘not ever possible under any condition.’ I was at a conference recently and I made this claim, which admittedly, as with any claim on this particular question, is based off of intuition and experience, which are totally fungible assets. But I made this claim that I didn’t think it was ever possible, and somebody in the audience asked me, well, have you considered meditating to create a synthetic AI? And the audience laughed and I stopped and I said: “You know, that’s actually not the worst idea I’ve been exposed to.” That’s not the worst potential solution for understanding intelligence: to try and reverse engineer my own brain with as few distractions from its normal working mechanics as possible. That may very easily be a credible aid to understanding how the brain works.
If we think about gravity, gravity is not a bad analog. Gravity is this force that everybody and their mother who’s past fifth grade understands how it works: you drop an apple, you know which direction it’s going to go. Not only that, but as you get experienced you can predict how fast it will fall, right? If you were to see a simulation drop an apple and it took twelve seconds to hit the ground, you’d know that was wrong; even if the rest of the vector was correct, the scalar is off a little bit.
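That intuition is easy to check with the standard free-fall formula: from roughly shoulder height an apple lands in well under a second, so a simulated twelve-second fall is obviously off.

```python
import math

g = 9.81        # m/s^2, gravitational acceleration at the Earth's surface
height = 2.0    # metres, roughly shoulder height
t = math.sqrt(2 * height / g)
print(f"fall time: {t:.2f} s")   # ~0.64 s
```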
The reality is that we can’t create an artificial gravity environment, right? We can create forces that simulate gravity. Centrifugal force is not a bad way of replicating gravity, but we don’t actually know enough about the underlying mechanics of gravity to create artificial gravity using the same mechanisms, more or less, that natural gravity uses. In fact, it was only a year and a half ago, closer to two years now, that the Nobel Prize in Physics was awarded to the individuals who detected gravitational waves, confirming that gravity really does propagate as waves and putting to rest an argument that had been going on since Einstein.
So I guess my point is that we haven’t really made progress in understanding the underlying mechanics, and every step we’ve taken has proven extremely valuable in the industrial sector but has actually opened up more and more unknowns about the inner workings of intelligence. If I had to bet today, not only is the time horizon on a true artificial intelligence extremely long, but I actually think it’s not impossible that it’s impossible altogether.
Listen to this one-hour episode or read the full transcript at www.VoicesinAI.com
Voices in AI
Visit VoicesInAI.com to access the podcast, or subscribe now:
iTunes
Play
Stitcher
RSS
Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.
Text
Isn’t It Time to Rethink Your Cloud Strategy?
Last year at re:Invent, Amazon AWS launched Outposts and finally validated the concept of hybrid cloud. Not that validation was really necessary, but still…
At the same time, what was once defined as a cloud-first strategy (the idea of starting every new initiative in the cloud, often with a single service provider) is today evolving into a multi-cloud strategy. This new strategy draws on a broad spectrum of possibilities, ranging from deployments on public clouds to on-premises infrastructures.
Purchasing everything from a single service provider is very easy and solves numerous issues but, in the end, it means accepting a lock-in that doesn’t pay off in the long run. Last month I was speaking with the IT director of a large manufacturing company in Italy who described how, over the last few years, his company had enthusiastically embraced one of the major cloud providers for almost every critical company project. He reported that the strategy had resulted in an IT budget that was out of control, even when taking into account new initiatives like IoT projects. The company’s main goal for 2019 is to regain control by repatriating some applications, building a multi-cloud strategy, and avoiding past mistakes like going “all in” on a single provider.
There Is Multi-Cloud and Then There Is Multi-Cloud
My recommendation to them was not merely to select a different provider for every project, but to work on a solution that abstracts applications and services from the infrastructure. This means you can buy a service from a provider, but you can also decide to go for raw compute power and storage and build your own service instead: a service optimized for your needs that is easy to replicate and migrate across different clouds.
Let’s take an example. You can use a NoSQL database from your provider of choice, or you can build your own NoSQL DB service starting from products available on the market. The former is easier to manage, whereas the latter is more flexible and less expensive. Containers and Kubernetes can make it easier to deploy, manage and migrate such a service from cloud to cloud.
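To make the abstraction idea concrete, here is a minimal Python sketch; the DocumentStore interface, the InMemoryStore backend and the make_store factory are hypothetical names used purely for illustration, not existing products or libraries.

from abc import ABC, abstractmethod
from typing import Optional

class DocumentStore(ABC):
    """The small interface the application depends on, instead of a provider-specific SDK."""

    @abstractmethod
    def put(self, key: str, doc: dict) -> None: ...

    @abstractmethod
    def get(self, key: str) -> Optional[dict]: ...

class InMemoryStore(DocumentStore):
    """Stand-in backend for local development and tests."""

    def __init__(self) -> None:
        self._data: dict = {}

    def put(self, key: str, doc: dict) -> None:
        self._data[key] = doc

    def get(self, key: str) -> Optional[dict]:
        return self._data.get(key)

def make_store(backend: str) -> DocumentStore:
    # A managed NoSQL service or a self-hosted database running in Kubernetes
    # would each get its own adapter here; the rest of the application never changes.
    if backend == "memory":
        return InMemoryStore()
    raise ValueError(f"unknown backend: {backend}")

store = make_store("memory")
store.put("order-42", {"status": "shipped"})
print(store.get("order-42"))

Switching from the managed service to the self-hosted one then becomes a configuration change rather than a rewrite.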
Kubernetes is now available from all major providers in various forms. The core is the same, and it is pretty easy to migrate from one platform to another. And once you move to containers, you’ll find plenty of ready-made images, and others can be built for every need.
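As a rough illustration of that portability, here is a sketch using the official Kubernetes Python client; the context names eks-prod and gke-dr are made up for the example and assume a kubeconfig that already points at clusters on two different providers.

from kubernetes import client, config

# Hypothetical kubeconfig contexts for clusters running on two different clouds.
for ctx in ("eks-prod", "gke-dr"):
    config.load_kube_config(context=ctx)   # point the client at that cluster
    apps = client.AppsV1Api()
    deployments = apps.list_namespaced_deployment(namespace="default")
    names = [d.metadata.name for d in deployments.items]
    print(f"{ctx}: {len(names)} deployments -> {names}")

The same code, and the same manifests, run against either cluster; only the context changes.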
Multi-Cloud Storage
Storage, as always, is a little bit more complicated than compute. Data has gravity and, as such, is difficult to move; but there are a few tools that come in handy when you plan for multi-cloud.
Block storage is the easiest to move. It is usually smaller in size, and now there are several tools that can help protect, manage and migrate it — both at the application and infrastructure levels. There are plenty of solutions. In fact, almost every vendor now offers a virtual version of its storage appliances that run on the cloud, as well as other tools to facilitate the migration between clouds and on-premises infrastructures. Think about Pure Storage or NetApp, just to name a couple. It’s even easier at the application level. Going back to the NoSQL mentioned earlier, solutions like Rubrik DatosIO or Imanis Data can help with migrations and data management.
File and object stores are significantly bigger and, if you do not plan in advance, things could get a bit complicated (but still feasible). Start by working with standard protocols and APIs. Those who choose the S3 API for their object storage needs will find it very easy to select a compatible storage system both in the cloud and for on-premises infrastructures. At the same time, many interesting products now allow you to access and move data transparently across several repositories (the list is getting longer by the day but, just to give you an idea, take a look at HammerSpace, Scality Zenko, RedHat Noobaa, and SwiftStack 1Space). I recently wrote a report for GigaOm about this topic and you can find more here.
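To give an idea of how little the application code changes when you stick to the S3 API, here is a minimal boto3 sketch; the endpoint URL and bucket name are placeholders, credentials are assumed to be configured in the environment, and the on-premises system is assumed to expose an S3-compatible API.

import boto3

def make_s3_client(endpoint_url=None):
    # With endpoint_url=None boto3 talks to AWS S3; pointing the same code at an
    # S3-compatible system (on-premises or another cloud) needs only this one change.
    return boto3.client("s3", endpoint_url=endpoint_url)

aws_s3 = make_s3_client()
onprem_s3 = make_s3_client("https://s3.example.internal:9000")  # placeholder endpoint

for s3 in (aws_s3, onprem_s3):
    s3.put_object(Bucket="backups", Key="hello.txt", Body=b"hello multi-cloud")

Everything else stays identical, which is what makes repatriation, or running a second backend in parallel, realistic.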
The same goes for other solutions. Why would you stay with a single cloud storage backend when you can have multiple ones, get the best out of them, maintain control over data and manage it on a single overlaying platform that hides complexity and optimizes data placement through policies? Take a look at what Cohesity is doing to get an idea of what I’m saying here.
The Human Factor of Multi-Cloud
Regaining control of your infrastructure is good from a budget perspective and for the freedom of choice it provides in the long term. On the other hand, working more on the infrastructure side of things requires an investment in people and their skills. I’d count this as an advantage, but not everybody sees it that way.
In my personal opinion, a more skilled team is highly likely to make better choices, react more quickly, and build optimized infrastructures that have a positive impact on the competitiveness of the entire business. On the other hand, if the organization is too small, it is hard to find the right balance.
Closing the Circle
Amazon AWS, Microsoft Azure and Google Cloud are building formidable ecosystems and you can decide that it is ok for you to stick with only one of them. Perhaps your cloud bill is not that high and you can afford it anyway.
You can also decide that multi-cloud means multiple cloud silos, but that is a very bad strategy.
Alternatively, there are several options out there to build your Cloud 2.0 infrastructure and maintain control over the entire stack and your data. True, it’s not the easiest path, nor the least expensive at the beginning, but it is the one that will probably pay off the most in the long term and will increase the agility and competitiveness of your infrastructure. On March 26th I will be co-hosting a GigaOm webinar sponsored by Wasabi on this topic, and there is an interview I recorded not too long ago with Zachary Smith (CEO of Packet) about new ways to think about cloud infrastructures. It is worth a listen if you are interested in learning more about a different approach to cloud and multi-cloud.
Originally posted on Juku.it