jbaquerot
My Data Science and Big Data blog
2K posts
Would you like to find the needle in the haystack of Big Data news? Follow me and I will keep you up to date on the most important Data Science and Big Data news.
jbaquerot · 7 years ago
It first introduces an example using Flask to set up an endpoint with Python, and then shows some of the issues to work around when building a Keras endpoint for predictions with Flask.
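As a minimal sketch of that pattern (my own illustration, not the article's code; the model file, route name and input format are assumptions), a TF1-era Flask endpoint for a Keras model might look like this:

import flask
import numpy as np
import tensorflow as tf
from keras.models import load_model

app = flask.Flask(__name__)
# Load the model once at startup, not per request.
model = load_model("model.h5")  # hypothetical model file
# Keep a handle to the TensorFlow graph: Flask serves requests from
# worker threads, where the default graph would otherwise be missing.
graph = tf.get_default_graph()

@app.route("/predict", methods=["POST"])
def predict():
    features = np.array(flask.request.json["features"], dtype="float32")
    with graph.as_default():
        score = float(model.predict(features.reshape(1, -1))[0][0])
    return flask.jsonify({"score": score})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)

The graph.as_default() wrapper is the kind of Keras-plus-Flask threading workaround the article refers to.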
jbaquerot · 7 years ago
New features of TensorFlow
jbaquerot · 7 years ago
Reinforcement Learning (RL) as a framework for computational neuroscience to model the decision-making process seems to be undervalued. Besides, there seem to be very few resources detailing how RL is applied in different industries. Despite the criticisms of RL's weaknesses, RL should never be neglected in the space of corporate research, given its huge potential in assisting decision making.
jbaquerot · 7 years ago
Kimberly-Clark is a Fortune 500 company. Its personal care product brands, including Huggies, Kleenex, and Scott, touch nearly 1 of every 4 people each day in 175 countries. Through Kimberly-Clark Professional, the company offers products and solutions to create healthier, safer and more productive workplaces in a variety of industries, including food services, healthcare, manufacturing, office buildings and more. As an industry leader, Kimberly-Clark is committed to driving digital innovations to improve operations and customer experiences in the fast-moving consumer goods category. Here are a few ways Kimberly-Clark is using big data, the Internet of Things (IoT) and artificial intelligence (AI) in its operations.
K-Challenge Brings Innovation to the Consumer Packaged Goods Industry
Kimberly-Clark sponsors the annual K-Challenge, which invites entrepreneurs, start-ups and other inventors to develop innovations for the consumer packaged goods category via the Kimberly-Clark Digital Innovation Lab (D’Lab). The competition focuses on six categories: data and predictive analytics, cyber security, omnichannel shopper experiences, IoT/wearables/connected devices, supply chain/operations solutions, and content and media experiences. This investment in technology innovation helps Kimberly-Clark adapt and apply some of today's best ideas in its operations.
Self-Service Retail Analytics
Kimberly-Clark generates a lot of data from internal sources, such as sales and marketing spend, and external sources, such as Nielsen data, web applications, store performance and purchasing information. Until the company adopted a platform powered by Tableau, Amazon Redshift, and Panoply, this complex data resided in inflexible systems spread across multiple sources, which made it difficult to use. Now that the data is consolidated, it is accessible to more professionals within the organization, and the company is saving time (eight hours weekly) and money ($250,000 over the course of two years) because analysts spend less time collecting and sorting through the data and more time interpreting it.
Internet of Things App for Facilities Managers
To help facilities managers monitor and manage the condition of restrooms remotely, Kimberly-Clark Professional introduced an Intelligent Restroom app. The state of a building's restrooms is critical in how tenants and customers perceive a building. An unhygienic bathroom can cause customers and tenants to have a lower opinion of the facility. Sensors on soap dispensers, air fresheners, entrance doors and more collect data that is then sent to the cloud-based app. Facilities managers can access the data from mobile or desktop to monitor the condition of the property's restrooms, and they don't have to be on-site. A pilot study of the Intelligent Restroom app showed the number of supplies used decreased by up to 20 percent when the app was deployed.
Simplifying a Complex Supply Chain with Data
The global supply chain Kimberly-Clark manages to produce its diverse line-up of products is massive and complex. The company has adopted a more networked approach to its supply chain, since each party is typically involved in multiple stages of the process. Data-driven analytics throughout the supply chain - from planning and manufacturing to partner management and delivery - help Kimberly-Clark simplify and sort out the complexities inherent in it, as well as find value throughout the process. Additionally, the company understands that data is vital in helping it meet changing customer demands, from transparency in the supply chain to customer expectations about product, price, service, and quality. It also takes a more open approach with suppliers and focuses on co-innovation.
jbaquerot · 7 years ago
With the right process in place, it is not difficult to find a state-of-the-art hyperparameter configuration for a given prediction task. Of the three approaches — manual, machine-assisted, and algorithmic — this article focuses on machine-assisted. It covers how I do it, offers proof that the method works, and explains why it works. The main principle is simplicity.
A Few Words on Performance
The first point about performance relates to the issue of accuracy (and other, more robust metrics) as a way to measure model performance. Consider the f1 score as an example. If you have a binary prediction task with 1% positives, then a model that predicts everything as a 0 will get near-perfect accuracy, while its f1 score collapses to zero (or is undefined). This can be handled with some changes to the way the f1 score deals with corner cases such as “all zeros,” “all ones,” and “no true positives.” But that's a big topic, and outside the scope of this article, so for now I just want to make it clear that this problem is a very important part of getting systematic hyperparameter optimization to work. There is a lot of research in this field, but it focuses more on algorithms and less on the fundamentals. Indeed, you can have the fanciest algorithm in the world — often also really complex — making decisions based on a metric that does not make sense, and that is not going to be hugely useful for dealing with real-life problems.

Make no mistake: EVEN WHEN WE DO GET THE PERFORMANCE METRIC RIGHT (yes, I'm yelling), we need to consider what happens in the process of optimizing a model. We have a training set, and then we have a validation set. As soon as we start to look at the validation results and make changes based on them, we start to create a bias towards the validation set. We end up with training results that are a product of the bias the machine has, and validation results that are a product of the bias we have. In other words, the resulting model does not have the properties of a well-generalized model; instead, it is biased away from being generalized. It is very important to keep this point in mind.
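A quick numeric illustration of the accuracy trap (a scikit-learn sketch I am adding here; the article itself does not use this code):

import numpy as np
from sklearn.metrics import accuracy_score, f1_score

y_true = np.array([1] * 10 + [0] * 990)  # 1% positives
y_pred = np.zeros(1000, dtype=int)       # model that predicts all zeros
print(accuracy_score(y_true, y_pred))    # 0.99 -- looks excellent
print(f1_score(y_true, y_pred))          # 0.0  -- the detector is useless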
The key point about a more advanced, fully-automated (unsupervised) approach to hyperparameter optimization is that it involves first solving these two problems: the performance metric and the validation bias just described. Once those are solved — and yes, there are ways to do that — the resulting metrics need to be combined into a single score, and that score becomes the metric against which the hyperparameter optimization process is optimized. Otherwise, no algorithm in the world will help, as it will optimize towards something other than what we are after. What are we after again? A model that will do what the prediction task articulates. Not just one model for one case (which is often the setting in papers covering the topic), but all kinds of models, for all kinds of prediction tasks. That is what a solution such as Keras allows us to do, and any attempt to automate parts of the process of using a tool such as Keras should embrace that idea.
What Tools Did I Use?
For everything in this article, I used Keras for the models and Talos, a hyperparameter optimization solution I built. Its benefit is that it exposes Keras as-is, without introducing any new syntax. It lets me do in minutes what used to take days, while having fun instead of enduring painful repetition. You can try it for yourself:
pip install talos
Or look at the code / docs here.
But the information I want to share, and the point I want to make, is not about a tool but about the process. You could follow the same procedure any way you like. One of the more prominent issues with automated hyperparameter optimization and related tools is that you generally tend to end up far away from the way you're used to working. The key to successful, prediction-task-agnostic hyperparameter optimization — as with all complex problems — is in embracing cooperation between man and machine. Every experiment is an opportunity to learn more about the practice (of deep learning) and the technology (in this case Keras). That opportunity should not be missed at the expense of process automation. At the same time, we should be able to take away the blatantly redundant parts of the process. Think of pressing shift-enter in Jupyter a few hundred times and waiting a minute or two between each iteration. In summary, at this point the goal should not be a fully-automated approach to finding the right model, but minimizing the procedural redundancy burdening the human. Instead of me mechanically operating the machine, the machine operates itself. Instead of analyzing the results of various model configurations one by one, I want to analyze them by the thousands or by the hundreds of thousands. There are over 80,000 seconds in a day, and a lot of parameter space can be covered in that time without me having to do anything about it.
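To make the workflow concrete, here is a minimal sketch of a Talos scan (the parameter grid and model are illustrative, and the exact Scan arguments vary between Talos versions):

import talos as ta
from keras.models import Sequential
from keras.layers import Dense, Dropout

# Hypothetical parameter space; Talos evaluates the combinations.
p = {'first_neuron': [16, 32, 64],
     'dropout': [0.0, 0.25, 0.5],
     'batch_size': [16, 32],
     'epochs': [50]}

# Talos expects a function that builds and trains a normal Keras model.
def minimal_model(x_train, y_train, x_val, y_val, params):
    model = Sequential()
    model.add(Dense(params['first_neuron'], activation='relu',
                    input_dim=x_train.shape[1]))
    model.add(Dropout(params['dropout']))
    model.add(Dense(1, activation='sigmoid'))
    model.compile(optimizer='adam', loss='binary_crossentropy',
                  metrics=['acc'])
    out = model.fit(x_train, y_train,
                    batch_size=params['batch_size'],
                    epochs=params['epochs'],
                    validation_data=[x_val, y_val],
                    verbose=0)
    return out, model

# x, y: your data as numpy arrays.
h = ta.Scan(x, y, params=p, model=minimal_model)

Note that the Keras code inside the function is exactly what you would write without Talos; only the params dictionary references are new.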
jbaquerot · 7 years ago
Deep Learning Techniques
Here are a few ways you can improve your fit time and accuracy with pre-trained models (a combined code sketch follows the list):
Research the ideal pre-trained architecture: Learn about the benefits of transfer learning, or browse some powerful CNN architectures. Consider domains that may not seem like obvious fits, but share potential latent features.
Use a smaller learning rate: Since pre-trained weights are usually better than randomly initialized weights, modify more delicately! Your choice here depends on the learning landscape and how well the pre-training went, but check errors across epochs for an idea of how close you are to convergence.
Play with dropout: As with Ridge and LASSO regularization for regression models, there is no optimized alpha or dropout for all models. It’s a hyper-parameter that depends on your specific problem, and must be tested. Start with bigger changes — a wider gridsearch span across orders of magnitude, like np.logspace() can provide — then drop down as with the learning rate above.
Limit weight sizes: We can limit the max norm (absolute value) of the weights for certain layers in order to generalize our model.
Don’t touch the first layers: The first hidden layers of a neural network tend to capture universal and interpretable features, like shapes, curves, or interactions that are very often relevant across domains. We should often leave these alone, and focus on optimizing the meta² latent level further back. This may mean adding hidden layers so we don’t rush the process!
Modify the output layer: Replace model defaults with a new activation function and output size that is appropriate for your domain. However, don’t limit yourself to the most obvious solution. While MNIST may seem like it wants 10 output classes, some numbers have common variations, and allowing for 12–16 classes may allow better settling of these variants and improved model performance! As with the tip above, deep learning models should be increasingly modified and tailored as we near output.
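Putting several of these tips together, here is a minimal Keras transfer-learning sketch (VGG16, the layer sizes and num_classes = 12 are my own illustrative choices, not prescriptions):

from keras.applications import VGG16
from keras.constraints import max_norm
from keras.layers import Dense, Dropout, Flatten
from keras.models import Model
from keras.optimizers import Adam

num_classes = 12  # assumption: your domain's output size

# A pre-trained architecture, with its convolutional base kept.
base = VGG16(weights='imagenet', include_top=False,
             input_shape=(224, 224, 3))
# Don't touch the first layers: freeze the base to preserve its
# universal features.
for layer in base.layers:
    layer.trainable = False

# Modify the output layer: a new head with dropout and a max-norm
# constraint on the weights.
x = Flatten()(base.output)
x = Dense(256, activation='relu', kernel_constraint=max_norm(3.0))(x)
x = Dropout(0.5)(x)
out = Dense(num_classes, activation='softmax')(x)

model = Model(base.input, out)
# Use a smaller learning rate, so pre-trained weights are modified
# delicately.
model.compile(optimizer=Adam(lr=1e-4),
              loss='categorical_crossentropy',
              metrics=['accuracy'])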
jbaquerot · 7 years ago
Technology moves extremely quickly. It seems like there is a new innovation produced nearly every day. Technology has changed the way business is done and has enabled many people to reach goals and try new things that they’ve never done before.
So which recent technological innovations are the most impressive? Eleven members of Forbes Technology Council shared their responses. The answers varied, but there is a consensus that high technology has transformed our lives and made many new businesses and achievements possible.
Practical Augmented Reality
Augmented reality stands to be the most immediate tech development that will affect our lives in the near term. With microdisplays becoming reasonably priced, we will see personal wearables becoming available to the masses. This will provide us with a whole new avenue of information retrieval use on a daily, minute-by-minute basis. - George Heimel, Square360
Generative Adversarial Networks
Generative adversarial networks (GANs) are a new type of neural network that is semi-supervised and enables companies to learn more from less labeled data. Let’s say you would like to identify customers who are likely to churn but only have labels of actual churn after X days for a handful of customers. GANs enable this and succeed where other fully supervised neural networks fail. - Anand Sampat, Datmo
Real-Time Language Translation
Driven by voice recognition technology coupled with AI, real-time language translators allow single-language speakers to have real-time conversations. It’s impressive both on the usability front (earphones) and on the level of complexity insofar as getting both the language and accent right. - Michael Gurau, The Beacon Group
Chatbots
I love the idea of chatbots and what they can do to take on some of the time-consuming work companies have while making businesses look good in terms of their customer experiences. Plus, chatbots are good at collecting information and analyzing it for further insights. It’s just a great way to cover tech support and service questions. - Muhammed Othman, Calendar
Artificial Intelligence In Mobile Apps
A year ago, building AI into mobile apps would have been extremely difficult and costly. Today, it’s a lot less expensive, and we can incorporate Microsoft Cognitive Services, Google services, or Amazon services that use AI to make a mobile experience more intelligent, anticipate users’ desires and needs, and present information in context. Artificial intelligence, particularly in mobile devices, has transformed the user experience in the last few years as cognitive services have advanced exponentially. - Sanjay Malhotra, Clearbridge Mobile
Inexpensive, Fast Storage
The cloud, AI, VR and other buzzwords are often considered the hottest tech trends. However, the rapidly declining cost and increasing performance of flash-based storage powers these trends. Without the modern, low-cost solid-state drive (SSD), VR and AI would still be inaccessible to most businesses. Cloud providers rely on SSDs. Innovation in the storage industry builds a foundation for new tech. - Jason Gill, Attracta
Deep Learning-Based Predictive Analytics
The biggest technology innovation of the last three years just might be predictive analytics using AI-based deep learning. The ability of a computer to learn by just analyzing data without having to let the algorithm know what variables are important is unprecedented. This form of unsupervised learning is drastically changing the role of technology. - Carlos Melendez, Wovenware
Serverless Computing
Today, developers have to worry not only about building their application but deploying and hosting it as well. It is a large portion of their workflow and requires them to commit resources in time and money up front. Serverless computing makes launching applications cheaper and faster by letting companies focus on the customer value without having to worry about deployment and scaling. The possibilities this will unlock are endless. - Nikhil Hasija, Azuqua
Brain-Computer Interfaces
Brain scanners can translate your thoughts into textual words. Discussions about these devices have been around for some time; however, they could only map a handful of commands. Imagine if only by wearing a ball cap you could think, “How deep is that river?” and have the answer read back into an earpiece. The next step is to get the information back directly into your brain. - Jere Simpson, KITEWIRE/Steel-Talon
AI And Machine Learning Applications
The most impressive piece of tech that has come out in recent years is the practical application of AI and machine learning. Whether it is modeling data, analyzing speech or driving a car, we are starting to see real-world applications of these technologies. It may have taken a few decades, but the field is making good on the promises that were made back in the 1980s. - Chris Kirby, Voices.com
The Cloud
No doubt the cloud is one of the great technologies of the last three years. SMBs that need to concentrate on their business do not have the right knowledge or teams in place to maintain their systems, or are not willing to spend money in that direction. The cloud offers them a unique and flexible option to make sure they can concentrate on business execution. - Ofer Laksman, Correlata Solutions
jbaquerot · 7 years ago
As Computer Vision represents a relative understanding of visual environments and their contexts, many scientists believe the field paves the way towards Artificial General Intelligence due to its cross-domain mastery. So what is Computer Vision? Here are a couple of formal textbook definitions:
* “the construction of explicit, meaningful descriptions of physical objects from images” (Ballard & Brown, 1982)
* “computing properties of the 3D world from one or more digital images” (Trucco & Verri, 1998)
* “to make useful decisions about real physical objects and scenes based on sensed images” (Stockman & Shapiro, 2001)
Why study Computer Vision? The most obvious answer is that there’s a fast-growing collection of useful applications derived from this field of study. Here are just a handful of them:
* Face recognition: Snapchat and Facebook use face-detection algorithms to apply filters and recognize you in pictures.
* Image retrieval: Google Images uses content-based queries to search relevant images. The algorithms analyze the content in the query image and return results based on best-matched content.
* Gaming and controls: a great commercial product in gaming that uses stereo vision is Microsoft Kinect.
* Surveillance: surveillance cameras are ubiquitous at public locations and are used to detect suspicious behaviors.
* Biometrics: fingerprint, iris and face matching remain common methods in biometric identification.
* Smart cars: vision remains the main source of information for detecting traffic signs, lights and other visual features.
I recently finished Stanford’s wonderful CS231n course on using Convolutional Neural Networks for visual recognition. Visual recognition tasks such as image classification, localization, and detection are key components of Computer vision. Recent developments in neural networks and deep learning approaches have greatly advanced the performance of these state-of-the-art visual recognition systems. The course is a phenomenal resource that taught me the details of deep learning architectures being used in cutting-edge computer vision research. In this article, I want to share the 5 major computer vision techniques I’ve learned as well as major deep learning models and applications using each of them.
jbaquerot · 7 years ago
We as data scientists need to step up our game and look for ways to mitigate emergent discrimination in our models. We need to make sure that our predictions do not disproportionately hurt people with certain sensitive characteristics (e.g., gender, ethnicity).
Luckily, last year's NIPS conference showed that the field is actively investigating how to bring fairness to predictive models. The number of papers published on the topic is rapidly increasing, a signal that fairness is finally being taken seriously. This point is also nicely made in the cartoon below, which was taken from the excellent CS 294: Fairness in Machine Learning course taught at UC Berkeley.
Some approaches focus on interpretability and transparency by allowing deeper interrogation of complex, black-box models. Other approaches make trained models more robust and fair in their predictions by constraining and changing the optimization objective. We will consider the latter approach and show how adversarial networks can bring fairness to our predictive models.
In this blog post, we will train a model for making income level predictions, analyse the fairness of its predictions, and then show how adversarial training can be used to make it fair. The approach used is based on the 2017 NIPS paper "Learning to Pivot with Adversarial Networks" by Louppe et al.
Note that most of the code has been omitted; you can find the Jupyter notebook with all the code here.
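As a rough sketch of the training scheme (my own simplified version, not the notebook's code; X and Y are the features and income labels, Z the sensitive attribute, and n_features, fairness_weight and n_iterations are assumed values):

from keras.layers import Input, Dense
from keras.models import Model

# Classifier: predicts income from the features.
clf_in = Input(shape=(n_features,))
h = Dense(32, activation='relu')(clf_in)
clf_out = Dense(1, activation='sigmoid')(Dense(32, activation='relu')(h))
clf = Model(clf_in, clf_out)

# Adversary: tries to recover the sensitive attribute from the
# classifier's prediction; if it succeeds, the prediction is unfair.
adv_in = Input(shape=(1,))
adv_out = Dense(1, activation='sigmoid')(Dense(32, activation='relu')(adv_in))
adv = Model(adv_in, adv_out)
adv.compile(optimizer='adam', loss='binary_crossentropy')

# Combined objective: stay accurate while making the adversary fail
# (note the negative weight on the adversary's loss).
adv.trainable = False
combined = Model(clf_in, [clf_out, adv(clf_out)])
combined.compile(optimizer='adam',
                 loss=['binary_crossentropy', 'binary_crossentropy'],
                 loss_weights=[1.0, -fairness_weight])

for _ in range(n_iterations):
    adv.fit(clf.predict(X), Z, epochs=1, verbose=0)  # train adversary
    combined.fit(X, [Y, Z], epochs=1, verbose=0)     # train classifier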
jbaquerot · 7 years ago
Successful deep learning models often require significant amounts of computational resources, memory and power to train and run, which presents an obstacle if you want them to perform well on mobile and IoT devices. On-device machine learning allows you to run inference directly on the devices, with the benefits of data privacy and access everywhere, regardless of connectivity. On-device ML systems, such as MobileNets and ProjectionNets, address the resource bottlenecks on mobile devices by optimizing for model efficiency. But what if you wanted to train your own customized, on-device models for your personal mobile application?
Yesterday at Google I/O, we announced ML Kit to make machine learning accessible for all mobile developers. One of the core ML Kit capabilities that will be available soon is an automatic model compression service powered by “Learn2Compress” technology developed by our research team. Learn2Compress enables custom on-device deep learning models in TensorFlow Lite that run efficiently on mobile devices, without developers having to worry about optimizing for memory and speed. We are pleased to make Learn2Compress for image classification available soon through ML Kit. Learn2Compress will be initially available to a small number of developers, and will be offered more broadly in the coming months. You can sign up here if you are interested in using this feature for building your own models.
How it Works
Learn2Compress generalizes the learning framework introduced in previous works like ProjectionNet and incorporates several state-of-the-art techniques for compressing neural network models. It takes as input a large pre-trained TensorFlow model provided by the user, performs training and optimization and automatically generates ready-to-use on-device models that are smaller in size, more memory-efficient, more power-efficient and faster at inference with minimal loss in accuracy.
[Figure: Learn2Compress for automatically generating on-device ML models.]
To do this, Learn2Compress uses multiple neural network optimization and compression techniques, including:
* Pruning reduces model size by removing weights or operations that are least useful for predictions (e.g., low-scoring weights). This can be very effective, especially for on-device models involving sparse inputs or outputs, which can be reduced up to 2x in size while retaining 97% of the original prediction quality.
* Quantization techniques are particularly effective when applied during training and can improve inference speed by reducing the number of bits used for model weights and activations. For example, using 8-bit fixed point representation instead of floats can speed up model inference, reduce power use and further reduce size by 4x.
* Joint training and distillation approaches follow a teacher-student learning strategy — we use a larger teacher network (in this case, the user-provided TensorFlow model) to train a compact student network (the on-device model) with minimal loss in accuracy.
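Learn2Compress itself is a closed Google service, but the distillation component can be sketched as a standard Hinton-style loss (my own illustration, not Google's code):

import tensorflow as tf

def distillation_loss(labels, student_logits, teacher_logits,
                      temperature=4.0, alpha=0.7):
    # Soft targets: the teacher's softened class probabilities.
    soft_targets = tf.nn.softmax(teacher_logits / temperature)
    soft_loss = tf.nn.softmax_cross_entropy_with_logits_v2(
        labels=soft_targets, logits=student_logits / temperature)
    # Hard loss: ordinary cross-entropy against the true labels.
    hard_loss = tf.nn.softmax_cross_entropy_with_logits_v2(
        labels=labels, logits=student_logits)
    # The temperature**2 factor keeps the soft-target gradients on the
    # same scale as the hard-label gradients.
    return (alpha * temperature ** 2 * tf.reduce_mean(soft_loss)
            + (1.0 - alpha) * tf.reduce_mean(hard_loss))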
jbaquerot · 7 years ago
Alexey Sapozhnikov, co-founder and CTO of Tel Aviv, Israel-based prooV, points out that while virtually every industry is embracing AI, it's the sectors that are stymied by well-worn processes and regulations — such as healthcare and government — that are likely to lag in AI adoption. “From the Food and Drug Administration’s stringent policies surrounding AI diagnosis software to developing complex proposals for government cybersecurity challenges, these processes can pose a huge stumbling block for organizations. Fortunately, many companies are realizing the importance of catching up to AI technology, lest they be left behind,” he said. So which industries are using AI, and which ones are likely to be disrupted by it? Here are 11 industries — in alphabetical order — that are experiencing disruption already.
Agriculture
Jason Behrmann has worked as a communications strategist at two Montreal AI startups, one in business analytics (Enkidoo), the other in healthcare (Aifred Health). He said that industries that are suffering labor shortages are also likely to be heavily impacted by AI. “Most think that AI will be a job-killing disruptive technology, but for industries experiencing labor shortages, the automation and efficiency gains from AI will, in fact, strengthen these industries and preserve jobs in the long run,” he said.
Behrmann says that one sector hit hard and suffering from labor shortages is agriculture. Few people want to work in this industry, and recent populist backlashes against immigration have shrunk the pool of available farm workers in many industrial countries. The situation in Canada is a good example. “We estimate that Canada will soon suffer from a deficiency of 100,000 farm workers. Adopting AI and related automation technologies is a matter of survival for the agriculture industry,” he said.
Call Centers
According to Cristian Rennella, CEO and co-founder of Colombia-based elMejorTrato.com, AI will replace the call center industry.
He said that after nine years working with an internal marketing team to answer their clients' questions through live chat, the company started developing its own chatbot. “Thanks to Artificial Intelligence through deep learning with Google's TensorFlow platform, we were able to automate 56.9 percent of queries. In this way, the user receives their response in seconds and our team only has to answer those questions that were never consulted before,” he said. “We believe that in two years we will be able to replace our entire call center with AI.”
Customer Experience
Ryan Lester is director of customer engagement technologies at Boston-based LogMeIn. He said that customer experience is emerging as an early success story for AI across industries. While retail is the most prevalent sector leveraging AI today, others are also taking notice. Travel companies, for example, are seeing real value in leveraging chatbots to create always-on, personalized, concierge-level service at scale. From airlines and hotels to travel agencies, AI is helping mitigate frustration during challenging travel situations by understanding the context of the customer’s circumstance and providing contextually relevant options to resolve the issue.
Kimberly Nevala, director of business strategies for SAS Best Practices, agrees. She said there are early adopters in all sectors — particularly disruptors born digital. However, industries making significant inroads currently include those with high-touch customer service requirements or engagement, such as finance and banking. A visible example is the proliferation of customer service chatbots for basic inquiries and common transactions.
Energy and Mining
Oil and gas is one of the largest industrial segments and is a natural fit for AI, according to AJ Abdallat, CEO of Glendale, Calif.-based Beyond Limits. Removing friction from port scheduling operations requires a rare form of machine intelligence called cognitive intelligence (or human-like reasoning). Cognitive AI is now being applied to track tankers to determine when they leave port, where they’re going, and how much petroleum or LNG they are transporting. Predicting what is being shipped, plus refinery destination and arrival times, will help traders make smarter decisions. This involves the fusion of the key cognitive capabilities of multi-agent scheduling with reactive recovery, asset management, rule compliance, diagnostics, and prognostics to ensure seamless autonomous operation.
The value an AI system can bring to the energy market is tremendous. When machine learning is applied to drilling, information from seismic vibrations, thermal gradients, strata permeability, pressure differentials, and more is collected. By analyzing this data, AI software can help geoscientists better assess variables, taking some of the guesswork out of equipment repair and failure and unplanned downtime, and even help determine potential locations of new wells. According to Abdallat, AI brings better predictive technology and efficiency to mining operations as well.
Healthcare
Healthcare is a sector where AI has endless possibilities, according to Vineet Chaturvedi, co-founder of Bengaluru, India-based Edureka. AI is currently used by healthcare innovators to predict diseases, identify high-risk patient groups, automate diagnostic tests and increase the speed and accuracy of treatment. It can also be used to improve drug formulations, predictive care, and DNA analysis that can positively impact the quality of healthcare and affect human lives.
Intellectual Property
Brisbane, Australia-based TrademarkVision is applying AI to help with issues around intellectual property in the image recognition space. In a statement to CMSWire, the company pointed out that the sheer volume of visual data available is a challenge for brands today, particularly in the area of design recognition and protection.
The company is using technology in 2D and 3D image recognition and artificial intelligence to provide visual search solutions. They don’t just scan objects for their likeness using data and codes, but through a combination of proprietary search algorithms and machine learning, the technology understands and thinks like humans, contextualizing and recognizing if one thing is visually like something else.
IT Service Management
Marcel Shaw is an AI evangelist, IT blogger and federal systems engineer for Ivanti. He said that with AI technology making its way into corporate and government networks, like it or not, it is going to be a dominant solution for IT service management (ITSM) in the future. There simply aren’t enough resources for analysts to get personally involved with the many requests and incidents that are coming in. “We will see many organizations turn to chatbots with AI capabilities as a means to handle, for example, front-line IT support calls. Further, although ITSM solutions are rapidly evolving, service management will never go away as long as IT exists. By implementing AI technology, IT service management will experience a disruptive change that will alter the way humans are involved with the service management process.”
Manufacturing
Manufacturing — vehicle manufacturers in particular — is also using AI, with a focus on automation and optimization. Industries dealing with complex knowledge requirements, such as pharma and healthcare, are vigorously planning and testing for the near future, although for now much of this remains talk and proof-of-concept work.
Here, emerging applications focus on augmenting decision-making. For example, using AI to parse complex medical data and research to better inform the practitioner's diagnosis and treatment recommendations. As you might suspect, access to integrated data is a key enabler and barrier.
Technical Support
Mark Brewer is global industry director for service management at IFS. He said that the service sector will experience much of this AI adoption throughout 2018, specifically the integration of AI-powered voice assistants, which represent a major opportunity for service organizations.
Many calls into a service helpdesk are uncomplicated queries, like establishing opening hours, or determining when an engineer is due to arrive, which means they are simple enough to be answered by a bot. This drives significant potential for companies to connect AI-powered voice assistants behind the scenes to enterprise software with capabilities such as self-service diagnostics or scheduling optimization engines, to automatically offer appointment slots.
Retail
Implementing chatbots will enable retailers to dramatically increase the amount of data they can collect about the customer, giving them a competitive advantage over those who do not implement chatbots. When customers use verbal requests to navigate websites and make purchases, chatbots will be able to capture audible reactions, improve conversation capabilities and, over time, provide the retailer with analytics that can be associated with the emotions and mood of their customers while online.
As a result, analytics will give retailers enough data to predict emotional responses to their customers’ online experience, enabling them to tailor and personalize the customer experience with a focus on making the customer happy, which increases the chances that the customer will return in the future.
Software Development
For Paulo Rosado, CEO of OutSystems, AI has the potential to transform the entire software lifecycle, with AI assistants helping with everything from modeling new applications with the right architecture and user experiences to analyzing the business value and impact for the organization. A combination of AI technologies like advanced machine learning, deep learning, natural language processing and business rules will have an impact on all steps of the software development life cycle, helping developers build better software faster.
jbaquerot · 7 years ago
Optimizing logistics, detecting fraud, composing art, conducting research, providing translations: intelligent machine systems are transforming our lives for the better. As these systems become more capable, our world becomes more efficient and consequently richer.
Tech giants such as Alphabet, Amazon, Facebook, IBM and Microsoft – as well as individuals like Stephen Hawking and Elon Musk – believe that now is the right time to talk about the nearly boundless landscape of artificial intelligence. In many ways, this is just as much a new frontier for ethics and risk assessment as it is for emerging technology. So which issues and conversations keep AI experts up at night?
1. Unemployment. What happens after the end of jobs?
The hierarchy of labour is concerned primarily with automation. As we’ve invented ways to automate jobs, we could create room for people to assume more complex roles, moving from the physical work that dominated the pre-industrial globe to the cognitive labour that characterizes strategic and administrative work in our globalized society.
Look at trucking: it currently employs millions of individuals in the United States alone. What will happen to them if the self-driving trucks promised by Tesla’s Elon Musk become widely available in the next decade? But on the other hand, if we consider the lower risk of accidents, self-driving trucks seem like an ethical choice. The same scenario could happen to office workers, as well as to the majority of the workforce in developed countries.
This is where we come to the question of how we are going to spend our time. Most people still rely on selling their time to have enough income to sustain themselves and their families. We can only hope that this opportunity will enable people to find meaning in non-labour activities, such as caring for their families, engaging with their communities and learning new ways to contribute to human society.
If we succeed with the transition, one day we might look back and think that it was barbaric that human beings were required to sell the majority of their waking time just to be able to live.
2. Inequality. How do we distribute the wealth created by machines?
Our economic system is based on compensation for contribution to the economy, often assessed using an hourly wage. The majority of companies are still dependent on hourly work when it comes to products and services. But by using artificial intelligence, a company can drastically cut down on relying on the human workforce, and this means that revenues will go to fewer people. Consequently, individuals who have ownership in AI-driven companies will make all the money.
We are already seeing a widening wealth gap, where start-up founders take home a large portion of the economic surplus they create. In 2014, roughly the same revenues were generated by the three biggest companies in Detroit and the three biggest companies in Silicon Valley ... only in Silicon Valley there were 10 times fewer employees.
If we’re truly imagining a post-work society, how do we structure a fair post-labour economy?
3. Humanity. How do machines affect our behaviour and interaction?
Artificially intelligent bots are becoming better and better at modelling human conversation and relationships. In 2014, a bot named Eugene Goostman won the Turing Challenge for the first time. In this challenge, human raters used text input to chat with an unknown entity, then guessed whether they had been chatting with a human or a machine. Eugene Goostman fooled more than half of the human raters into thinking they had been talking to a human being.
This milestone is only the start of an age where we will frequently interact with machines as if they are humans; whether in customer service or sales. While humans are limited in the attention and kindness that they can expend on another person, artificial bots can channel virtually unlimited resources into building relationships.
Even though not many of us are aware of this, we are already witnesses to how machines can trigger the reward centres in the human brain. Just look at click-bait headlines and video games. These headlines are often optimized with A/B testing, a rudimentary form of algorithmic optimization for content to capture our attention. This and other methods are used to make numerous video and mobile games become addictive. Tech addiction is the new frontier of human dependency.
On the other hand, maybe we can think of a different use for software, which has already become effective at directing human attention and triggering certain actions. When used right, this could evolve into an opportunity to nudge society towards more beneficial behavior. However, in the wrong hands it could prove detrimental.
4. Artificial stupidity. How can we guard against mistakes?
Intelligence comes from learning, whether you’re human or machine. Systems usually have a training phase in which they "learn" to detect the right patterns and act according to their input. Once a system is fully trained, it can then go into test phase, where it is hit with more examples and we see how it performs.
Obviously, the training phase cannot cover all possible examples that a system may deal with in the real world. These systems can be fooled in ways that humans wouldn't be. For example, random dot patterns can lead a machine to “see” things that aren’t there. If we rely on AI to bring us into a new world of labour, security and efficiency, we need to ensure that the machine performs as planned, and that people can’t overpower it to use it for their own ends.
5. Racist robots. How do we eliminate AI bias?
Though artificial intelligence is capable of a speed and capacity of processing far beyond that of humans, it cannot always be trusted to be fair and neutral. Google and its parent company Alphabet are among the leaders when it comes to artificial intelligence, as seen in Google’s Photos service, where AI is used to identify people, objects and scenes. But it can go wrong, such as when a camera missed the mark on racial sensitivity, or when software used to predict future criminals showed bias against black people.
We shouldn’t forget that AI systems are created by humans, who can be biased and judgemental. Once again, if used right, or if used by those who strive for social progress, artificial intelligence can become a catalyst for positive change.
6. Security. How do we keep AI safe from adversaries?
The more powerful a technology becomes, the more it can be used for nefarious reasons as well as good. This applies not only to robots produced to replace human soldiers, or autonomous weapons, but to AI systems that can cause damage if used maliciously. Because these fights won't be fought on the battleground only, cybersecurity will become even more important. After all, we’re dealing with a system that is faster and more capable than us by orders of magnitude.
7. Evil genies. How do we protect against unintended consequences?
It’s not just adversaries we have to worry about. What if artificial intelligence itself turned against us? This doesn't mean by turning "evil" in the way a human might, or the way AI disasters are depicted in Hollywood movies. Rather, we can imagine an advanced AI system as a "genie in a bottle" that can fulfill wishes, but with terrible unforeseen consequences.
In the case of a machine, there is unlikely to be malice at play, only a lack of understanding of the full context in which the wish was made. Imagine an AI system that is asked to eradicate cancer in the world. After a lot of computing, it spits out a formula that does, in fact, bring about the end of cancer – by killing everyone on the planet. The computer would have achieved its goal of "no more cancer" very efficiently, but not in the way humans intended it.
8. Singularity. How do we stay in control of a complex intelligent system?
The reason humans are on top of the food chain is not down to sharp teeth or strong muscles. Human dominance is almost entirely due to our ingenuity and intelligence. We can get the better of bigger, faster, stronger animals because we can create and use tools to control them: both physical tools such as cages and weapons, and cognitive tools like training and conditioning.
This poses a serious question about artificial intelligence: will it, one day, have the same advantage over us? We can't rely on just "pulling the plug" either, because a sufficiently advanced machine may anticipate this move and defend itself. This is what some call the “singularity”: the point in time when human beings are no longer the most intelligent beings on earth.
9. Robot rights. How do we define the humane treatment of AI?
While neuroscientists are still working on unlocking the secrets of conscious experience, we understand more about the basic mechanisms of reward and aversion. We share these mechanisms with even simple animals. In a way, we are building similar mechanisms of reward and aversion in systems of artificial intelligence. For example, reinforcement learning is similar to training a dog: improved performance is reinforced with a virtual reward.
Right now, these systems are fairly superficial, but they are becoming more complex and life-like. Could we consider a system to be suffering when its reward functions give it negative input? What's more, so-called genetic algorithms work by creating many instances of a system at once, of which only the most successful "survive" and combine to form the next generation of instances. This happens over many generations and is a way of improving a system. The unsuccessful instances are deleted. At what point might we consider genetic algorithms a form of mass murder?
Once we consider machines as entities that can perceive, feel and act, it's not a huge leap to ponder their legal status. Should they be treated like animals of comparable intelligence? Will we consider the suffering of "feeling" machines?
Some ethical questions are about mitigating suffering, some about risking negative outcomes. While we consider these risks, we should also keep in mind that, on the whole, this technological progress means better lives for everyone. Artificial intelligence has vast potential, and its responsible implementation is up to us.
jbaquerot · 7 years ago
BlueData has developed a prototype running its big data platform to launch clusters using the Kubernetes container orchestrator. The move is another step in bridging the gap between the stateless world of Kubernetes and stateful needs of big data.
Tom Phelan, co-founder and chief architect at BlueData, said the prototype is using its EPIC (Elastic Private Instant Clusters) big data platform running on Kubernetes. The controller is deployed as a stateful pod with its own public IP address. Customers can then manage the cloud-based cluster in the same manner in which they manage bare metal servers. This allows for the launching of big data clusters using Kubernetes.
BlueData is targeting the move at Fortune 1000 companies that have been challenged in managing big data analysis. Phelan explained that these firms are attempting to optimize hardware usage and connect to numerous data lakes while controlling security risks. These include firms in the financial, legal, medical, insurance, and government sectors.
BlueData provides a big data software platform that uses embedded Docker containers to deliver big-data-as-a-service for its customers.
Many of BlueData’s customers today are using bare metal servers or virtual machines (VMs) to support their big data needs. However, Phelan explained that customers are looking to streamline finances and operations around the open source container orchestrator.
“These customers are still for the most part just dabbling with Kubernetes, but they are very interested in going in that direction and want to know if there are ways to manage their big data needs as well,” Phelan said. “That’s what we are trying to show.”
Kubernetes Challenges
Kubernetes has elbowed its way to the top as the enterprise choice for container management. While challenges still exist, most analysts and vendors have noted that Kubernetes has become an important component for enterprises looking to maximize their cloud deployments.
However, Kubernetes is designed primarily for stateless applications. This means that it was not created to handle data storage. This has led to a robust business of storage vendors developing stateful appendages that can plug into a Kubernetes-managed container deployment to handle storage needs.
BlueData is part of that development, though focused on larger data needs. Phelan made a point to note that BlueData is not a storage provider, and instead is an infrastructure platform for handling the automation and lifecycles of data storage needs.
“Big data is very stateful. It’s not microservices or cloud native,” Phelan explained. “Big data is monolithic and uses a lot of local storage resources. We definitely have our work cut out for us.”
The Kubernetes community has begun to more formally address data storage needs. Some of the more recent platform updates have begun to identify the concept of stateful, which Phelan said is helping the process.
But those efforts fail to take into account pressing security issues. This requires, among other things, a consistent IP address. BlueData works to automate the configuration of the software running on the containers to handle the security and data storage needs.
As for the running prototype, Phelan said that once the firm is comfortable with stability it expects to release a commercial version. That is expected to happen over the next 12 months.
“What we have seen so far looks pretty good, but we are still running tests to make sure it's ready for our customers,” Phelan said.