Text
If we stopped developing more powerful consumer computer hardware today, I think we'd be fine. We don't need more powerful graphics than we have now. We don't need faster processors than we have now (except for scientific purposes). I wouldn't mind if we just focused on making more efficient CPUs and improving the efficiency of our software (looking at you, Microsoft).
#computers#technology#this post is inspired by discussion of how expensive it would be to manufacture an iPhone in the US#like#we can just stop chasing more powerful hardware#we should chase longevity#we should chase efficiency#and we should chase less resource-intensive software
5 notes
Text
EVERY FOUNDER SHOULD KNOW ABOUT STOCK
Most will say that any ideas you think of while you're employed by the company belong to them. To make all this happen, you're going to have to pay for the servers that the software runs on, and the existing players will only have the advantages any big company has in its market. So innovation happens at hacker speeds instead of big company speeds. So who should start a startup now, because the light is better there. To make all this happen, you're going to make money. What surprised me the most is that everything was actually fairly predictable! But that doesn't sound right either.1 Much of what's most novel about YC is due to Jessica Livingston.
In almost every domain there are advantages to seeming good. I don't see why it ought to be the surprises, the things I didn't tell people. What we couldn't stand were people with a lot of Lisp's unpopularity is simply due to having an unfamiliar syntax. Ron Conway. I wish we had. Places that aren't startup hubs are toxic to startups. If we've learned one thing from funding so many startups, it's pretty clear how big a role luck plays and how much stock they each have. If they did, it would be hard to start a new channel. Many languages especially the ones designed for other people to do such things for him, leaving all his time free for math.
We both had roughly zero assets. Founders of successful startups talked less about choosing cofounders and more about how hard it must be to start a new channel. If you're starting a restaurant, maybe, but not unfair.2 The other reason to spend money slowly is to encourage a culture of cheapness. After a while, if you don't, you're in the crosshairs of whoever does.3 They're like dealers; they sell the stuff, but they are an important fraction, because they don't have sufficient flexibility to adapt to them. Talk to as many VCs as you can. He makes a dollar only when someone on the other end of the spectrum is designing chairs. And now Wall Street is collectively kicking itself. But I think in most businesses the advantages of young founders. It was also the mom in another sense: she had the last word.
That's why the Internet won. I'm a writer, and writers always get disproportionate attention. And yet a lot of founders are surprised is that because they work fast, they expect everyone else to. People just don't seem to get how different it is till they do it. Instead of delivering what viewers want, they're trying to avoid. But you have to go find individual people who are not like you want from technology? But this is just the kind that tends to be simply This sucks.
Good does not mean being a pushover. It's harder to hide wrongdoing now. If you try something that blows up and leaves you broke at 26, big deal; a lot of startups have that form: someone comes along and makes something for a tenth or a hundredth of what it used to cost, and the best thing they can do is jump in immediately.4 The evolution of technology is one of the most exciting new applications that get written in the language, and the two were very separate. The evolution of technology is one of the most surprising things I saw was the willingness of people to help us.5 The solution to this puzzle is to realize that economic inequality should be decreased? Many students feel they should wait and get a job or being a student, because it never stops. The principle extends even into programming. That is wildly oversimplified, of course: insurance, business license, unemployment compensation, various things with the IRS. For example, if you're into that sort of thing. That depends.
But really what work experience refers to is not some specific expertise, but the elimination of the flake reflex—the ability to get things done, with no excuses. Our existing investors, knowing that we needed money and had nowhere else to get it, you can use any language you want, really in the blink of an eye.6 Brevity is underestimated and even scorned. That makes him seem like a judge. Number two, make the most of the time, were worth several million dollars. Most rich people are looking for good investments. A few days ago I realized something surprising: the situation with time is much the same way that all you have to understand it, and c spends countless hours in front of them and refine it based on their reactions. And the fact that most good startup ideas seem bad: If you spend all your time programming, you will fail. We may have democracy, or we may have wealth concentrated in the hands of a few, but we couldn't figure out how to give them what they want when they want it, and that means that investor starts to lose deals. They should both just face the fact that Jessica and I were 29 and 30 respectively when we started YC.7 It was a sign of an underlying lack of resourcefulness. Someone has an idea for something; they build it; and in Berkeley immediately north or south of campus.
Jessica and I were 29 and 30 respectively when we started YC.8 Your final advantage, ignorance, may not sound very useful.9 It seemed odd that the outliers at the two ends of the spectrum is designing chairs. But they underestimated the force of their desire to connect with one another.10 Far from it. It's completely pervasive. But the more you realize you can do a lot more play in it.
Are you the right sort of founder a one line intro to a VC, and he'll chase down the implications of what one said to them. All other things being equal, they should get a good grade. Why are founders fooled by this? Another is to work harder. They traversed idea space as gingerly as a very old person traverses the physical world. It's a far more intense relationship than you usually see between coworkers—partly because the stresses are so much greater, and partly because at first the founders are, and that explains most of the surprises.11 They think they're going to be that smart. The woman in charge of sales was so tenacious that I used to want to add but our main competitor, whose ass we regularly kick, has a lot of people with technical backgrounds.
Notes
27 with the other cheek skirts the issue; the crowds of shoppers drifting through this huge mall reminded George Romero of zombies. The way to create one of the most abstract ideas, because the ordering system was small.
What people usually mean when they set up an additional page to deal with the New Deal but with World War II to the ideal of a reactor: the source files of all tend to get a patent is conveniently just longer than the rich have better opportunities for education. But that doesn't seem an impossible hope. Or a phone, and Cooley Godward.
Creative Destruction Whips through Corporate America. Algorithms that use it are called naive Bayesian. And while we were using Lisp, you have to be good. When I was once trying to steal the company is always raising money, the number at Harvard Business School at the company's PR people worked hard to prevent shoplifting because in their target market the shoplifters are also several you can't distinguish between selecting a link and following it; all you'd need to be a good way to make money; and not others, and only one founder take fundraising meetings is that so many others the pattern for the more thoughtful people start to spread them.
Which means the startup will be near-spams that have economic inequality as a kid who had to push to being a doctor. There were several other reasons, avoid the topic. One of the density of startup: one kind that's called into being to commercialize a scientific discovery.
Y Combinator makes founders move for 3 months also suggests one underestimates how hard it is unfair when someone gets drunk instead of reacting. Parker, William R. But while such trajectories may be underestimating VCs. By all means crack down on these.
If it's 90%, you'd ultimately be hurting yourself, if you have to spend a lot of investors. If this happens because they're determined to fight. Any expected value calculation varies from person to run a mile in under 4 minutes. This explains why such paintings are slightly worse.
If you're trying to work not just the location of the things they've tried on the aspect they see you at a regularly increasing rate to manufacture a perfect growth curve, etc. An hour old is not to have been the first question is only half a religious one; there is at fault, since that was a company becomes big enough, it was spontaneous. If anyone wanted to than because they are at some of those things that's not relevant to an audience of investors want to keep tweaking their algorithm to get into that because server-based applications greatly to be about 50%. So the most part and you have a connection to one of his professors did in salary.
Perhaps this is why so many people mistakenly think it was true that the meaning of the present that most people than subsequent millions. But you couldn't slow the latter case, 20th century was also obvious to us.
94 says a 1952 study of rhetoric was inherited directly from Rome. I said yes.
The shares set aside an option to maintain your target growth rate as evolutionary pressure is such a valuable technique that any idea relating to the margin for error.
Some urban renewal experts took a shot at destroying Boston's in the Neolithic period. Others will say this amounts to the Bureau of Labor. So how do they learn that nobody wants what they meant. I get attacked a lot of detail.
#automatically generated text#Markov chains#Paul Graham#Python#Patrick Mooney#person#happen#margin#charge#time#mom#PR#Creative#curve#shoppers#lot#audience#south#sup
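For context, the tags above indicate the post was produced by a Markov-chain text generator run over Paul Graham's essays. The following is a minimal illustrative sketch of that technique in Python, not Patrick Mooney's actual generator, and the one-line corpus is a placeholder:

```python
import random
from collections import defaultdict

def build_chain(text: str, order: int = 2) -> dict:
    """Map each run of `order` consecutive words to the words observed after it."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        chain[tuple(words[i:i + order])].append(words[i + order])
    return chain

def generate(chain: dict, length: int = 50) -> str:
    """Random-walk the chain, emitting one word per step."""
    state = random.choice(list(chain))
    output = list(state)
    for _ in range(length):
        followers = chain.get(state)
        if not followers:
            break
        next_word = random.choice(followers)
        output.append(next_word)
        state = (*state[1:], next_word)
    return " ".join(output)

corpus = "placeholder corpus: the essays the generator was trained on would go here " * 20
print(generate(build_chain(corpus)))
```

With a large enough corpus, the walk stitches together locally plausible phrases that drift in meaning, which is exactly the texture of the post above.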
2 notes
Text
Investing in frontier technology is (and isn’t) cleantech all over again
Shahin Farshchi Contributor
Shahin Farshchi is a partner at Lux Capital.
I entered the world of venture investing a dozen years ago. Little did I know that I was embarking on a journey to master the art of balancing contradictions: building up experience and pattern recognition to identify outliers, emphasizing what’s possible over what’s actual, generating comfort and consensus around a maverick founder with a non-consensus view, seeking the comfort of proof points in startups that are still very early, and most importantly, knowing that no single lesson learned can ever be applied directly in the future as every future scenario will certainly be different.
I was fortunate to start my venture career at a fund specializing in funding “Frontier” technology companies. Real estate was white-hot, banks were practically giving away money, and VCs were hungry to fund hot startups.
I quickly found myself in the same room as mainstream software investors looking for what’s coming after search, social, ad-tech, and enterprise software. Cleantech was very compelling: an opportunity to make money while saving our planet. Unfortunately for most, neither happened: they lost their money and did little to save the planet.
Fast forward a decade: after investors scored their wins in online lending, cloud storage, and on-demand, I find myself, again, in the same room with consumer and cloud investors venturing into “Frontier Tech.” They are dazzled by the founders’ presentations, and proud to play a role in funding work that turns the seemingly impossible into the possible through science. However, what lessons did they take away from the Cleantech cycle? What should Frontier Tech founders and investors be thinking about to avoid the same fate?
Coming from a predominantly academic background, I was excited to be part of the emerging trend of funding founders leveraging technology to make how we generate, move, and consume our natural resources more efficient and sustainable. I was thrilled to be digging into technologies underpinning new batteries, photovoltaics, wind turbines, superconductors, and power electronics.
To prove out their business models, these companies needed to build out factories, supply chains, and distribution channels. It wasn’t long until the core technology development became a small piece of an otherwise complex, expensive operation. The hot energy startup factory started to look and feel mysteriously like a magnetic hard drive factory down the street. Wait a minute, that’s because much of the equipment and staff did come from factories making components for PCs; but this time they were making products for generating, storing, and moving energy more renewably. So what went wrong?
Whether it was solar, wind, or batteries, the metrics were pretty similar: dollars per megawatt, mass per megawatt, or, multiplying by time, dollars and mass per unit of energy, whether for the factories or the systems. Energy is pretty abundant, so the race was on to produce and handle a commodity. Getting started as a real, competitive business meant going big, since many of the metrics above depended on size and scale. Hundreds of millions of dollars of venture money only went so far.
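As a back-of-the-envelope illustration of how those capacity metrics turn into per-energy metrics, here is a small sketch; all of the numbers are made up for the example and are not from the article:

```python
# Capacity metrics of the kind cleantech startups were judged on.
capex_dollars_per_mw = 1_500_000   # capital cost per megawatt of capacity (illustrative)
mass_kg_per_mw = 50_000            # hardware mass per megawatt of capacity (illustrative)

# Multiplying capacity by operating time converts it into delivered energy.
years, hours_per_year, capacity_factor = 20, 8760, 0.25
lifetime_mwh_per_mw = years * hours_per_year * capacity_factor

# Dividing the per-capacity figures by that energy gives per-energy figures.
dollars_per_mwh = capex_dollars_per_mw / lifetime_mwh_per_mw
kg_per_mwh = mass_kg_per_mw / lifetime_mwh_per_mw

print(f"${dollars_per_mwh:,.2f} per MWh, {kg_per_mwh:.3f} kg per MWh")
```

The point of metrics like these is that they only improve with size and scale, which is why "going big" was unavoidable.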
The onus was on banks, private equity, engineering firms, and other entities that do not take technology risk to make a leap of faith and take a product or factory from 1/10th scale to full scale. The rest is history: most cleantech startups hit a funding valley of death. They needed to raise big money while sitting at high valuations, without the kernel of a real business to attract the investors who write those big checks to scale up businesses.
Frontier Tech, like Cleantech, can be capital-intense. Whether it's satellite communications, driverless cars, AI chips, or quantum computing, relatively large amounts of capital are needed to take these startups to the point where they can demonstrate the kernel of a competitive business. In other words, they typically need at least tens of millions of dollars to show they can sell something and profitably scale that business into a big market. Some money is dedicated to technology development, but, as in Cleantech, a disproportionate amount will go into building up an operation to support the business. Here are a couple of examples:
Satellite communications: It takes a few million dollars to demonstrate a new radio and spacecraft. It takes tens of millions of dollars to produce the satellites, put them into orbit, and build the ground station infrastructure, software, systems, and operations needed to serve fickle enterprise customers, all while facing competition from incumbent or in-house efforts. At what point will the economics of the business attract a conventional growth investor to fund expansion? If Cleantech taught us anything, it's that the big money would prefer to watch from the sidelines for longer than you'd think.
Quantum compute: Moore's law is improving new computers at a breakneck pace, but the way they get implemented is pretty incremental. Basic compute architectures date back to the dawn of computing, and new devices can take decades to find their way into servers. For example, NAND Flash technology dates back to the 80s, found its way into devices in the 90s, and has been slowly penetrating datacenters over the past decade. The same goes for GPUs, even with all the hype around AI. Quantum compute companies can offer a service directly to users, e.g., homomorphic computing, advanced encryption/decryption, or molecular simulations. However, that would be one of the rare occasions where a novel computing machine company offered computing as a service, as opposed to just selling machines. If I had to guess, building the quantum computers will be relatively quick; building the business will be expensive.
Operating systems for driverless cars: Tremendous progress has been made since Google first presented its early work in 2011. Dozens of companies are building software that does some combination of perception, prediction, planning, mapping, and simulation. Every operator of autonomous cars, whether vertically integrated like Zoox or working in partnership like GM/Cruise, has its own proprietary technology stack. Unlike building an iPhone app, where the tools are abundant and the platform is well understood, integrating a complete software module into an autonomous driving system may take more effort than putting together the original code in the first place.
How are Frontier-Tech companies advantaged relative to their Cleantech counterparts? For starters, most aren’t producing a commodity: it’s easier to build a Frontier-tech company that doesn’t need to raise big dollars before demonstrating the kernel of an interesting business. On rare occasions, if the Frontier tech startup is a pioneer in its field, then it can be acquired for top dollar for the quality of its results and its team.
Recent examples are Salesforce’s acquisition of Metamind, GM’s acquisition of Cruise, and Intel’s acquisition of Nervana (a Lux investment). However, as more competing companies get to work on a new technology, the sense of urgency to acquire rapidly diminishes as the scarce, emerging technology quickly becomes widely available: there are now scores of AI, autonomous car, and AI chip companies out there. Furthermore, as technology becomes more complex, its cost of integration into a product (think about the driverless car example above) also skyrockets. Knowing this likely liability, acquirers will tend to pay less.
Creative founding teams will find ways to incrementally build interesting businesses as they are building up their technologies.
I encourage founders and investors to emphasize the businesses they are building through their inventions. I encourage founders to rethink plans that require tens of millions of dollars before being able to sell products, while warning them not to chase revenue for the sake of revenue.
I suggest they look closely at their plans and find creative ways to start penetrating, or building, exciting markets, and hence interesting businesses, with modest amounts of capital. I advise them to work with investors who, regardless of whether they saw how Cleantech unfolded, are convinced that their dollars can take the company to the point where it can engage customers with an interesting product, with a sense for how it can scale into an attractive business.
0 notes
Text
A Short Introduction to Blockchain - For Normal People
Crypto-what? If you've tried to dive into this mysterious thing called blockchain, you'd be forgiven for recoiling in horror at the sheer opaqueness of the technical jargon that is typically used to frame it. So before we get into what a cryptocurrency is and how blockchain technology might change the world, let's discuss what blockchain actually is.

In the simplest terms, a blockchain is a digital ledger of transactions, not unlike the ledgers we have been using for hundreds of years to record sales and purchases. The function of this digital ledger is, in fact, pretty much identical to a traditional ledger in that it records debits and credits between people. That is the core concept behind blockchain; the difference is who holds the ledger and who verifies the transactions.

With traditional transactions, a payment from one person to another involves some kind of intermediary to facilitate the transaction. Let's say Rob wants to transfer £20 to Melanie. He can either give her cash in the form of a £20 note, or he can use some kind of banking app to transfer the money directly to her bank account. In both cases, a bank is the intermediary verifying the transaction: Rob's funds are verified when he takes the cash out of a cash machine, or they are verified by the app when he makes the digital transfer. The bank decides if the transaction should go ahead. The bank also holds the record of all transactions made by Rob, and is solely responsible for updating it whenever Rob pays someone or receives money into his account. In other words, the bank holds and controls the ledger, and everything flows through the bank.

That's a lot of responsibility, so it's important that Rob feels he can trust his bank; otherwise he would not risk his money with them. He needs to feel confident that the bank will not defraud him, will not lose his money, will not be robbed, and will not disappear overnight. This need for trust has underpinned pretty much every major behaviour and facet of the monolithic finance industry, to the extent that even when it was discovered that banks were being irresponsible with our money during the financial crisis of 2008, the government (another intermediary) chose to bail them out rather than risk destroying the final fragments of trust by letting them collapse.

Blockchains operate differently in one key respect: they are entirely decentralised. There is no central clearing house like a bank, and there is no central ledger held by one entity. Instead, the ledger is distributed across a vast network of computers, called nodes, each of which holds a copy of the entire ledger on its hard drive. These nodes are connected to one another via a piece of software called a peer-to-peer (P2P) client, which synchronises data across the network of nodes and makes sure that everybody has the same version of the ledger at any given point in time.

When a new transaction is entered into a blockchain, it is first encrypted using state-of-the-art cryptographic technology. Once encrypted, the transaction is converted into something called a block, which is basically the term used for an encrypted group of new transactions.
That block is then sent (or broadcast) into the network of computer nodes, where it is verified by the nodes and, once verified, passed on through the network so that the block can be added to the end of the ledger on everybody's computer, beneath the list of all previous blocks. This is called the chain, hence the technology is referred to as a blockchain. Once approved and recorded in the ledger, the transaction can be completed. This is how cryptocurrencies like Bitcoin work.

Accountability and the removal of trust

What are the advantages of this system over a banking or central clearing system? Why would Rob use Bitcoin instead of normal currency? The answer is trust. As mentioned before, with the banking system it is critical that Rob trusts his bank to protect his money and handle it properly. To make sure this happens, enormous regulatory systems exist to verify the actions of the banks and ensure they are fit for purpose. Governments then regulate the regulators, creating a sort of tiered system of checks whose sole purpose is to help prevent mistakes and bad behaviour. In other words, organisations like the Financial Services Authority exist precisely because banks can't be trusted on their own. And banks frequently make mistakes and misbehave, as we have seen too many times. When you have a single source of authority, power tends to get abused or misused. The trust relationship between people and banks is awkward and precarious: we don't really trust them, but we don't feel there is much alternative.

Blockchain systems, on the other hand, don't need you to trust them at all. All transactions (or blocks) in a blockchain are verified by the nodes in the network before being added to the ledger, which means there is no single point of failure and no single approval channel. If a hacker wanted to tamper successfully with the ledger on a blockchain, they would have to simultaneously hack millions of computers, which is almost impossible. A hacker would also be pretty much unable to bring a blockchain network down, as, again, they would need to be able to shut down every single computer in a network of computers distributed around the world.

The encryption process itself is also a key factor. Blockchains like Bitcoin's use deliberately difficult procedures for their verification process. In the case of Bitcoin, blocks are verified by nodes performing a deliberately processor- and time-intensive series of calculations, often in the form of puzzles or complex mathematical problems, which mean that verification is neither instant nor accessible. Nodes that do commit the resources to verifying blocks are rewarded with a transaction fee and a bounty of newly minted Bitcoins. This has the function of both incentivising people to become nodes (because processing blocks like this requires pretty powerful computers and a lot of electricity) and handling the process of generating - or minting - units of the currency. This is referred to as mining, because it involves a considerable amount of effort (by a computer, in this case) to produce a new commodity.
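To make the block/chain/mining vocabulary above concrete, here is a deliberately tiny Python sketch of a hash-linked chain with a toy proof-of-work. It is illustrative only: the article speaks loosely of "encryption", but what links blocks in practice is hashing, and real Bitcoin block formats, transaction rules, and difficulty adjustment are far more involved than this.

```python
import hashlib
import json
import time

def hash_block(block: dict) -> str:
    # Hash a canonical JSON encoding of the block's contents.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def make_block(transactions: list, previous_hash: str, difficulty: int = 4) -> dict:
    """Bundle transactions into a block and 'mine' it with a toy proof-of-work."""
    block = {
        "timestamp": time.time(),
        "transactions": transactions,
        "previous_hash": previous_hash,  # this link to the prior block forms the chain
        "nonce": 0,
    }
    # Mining: search for a nonce whose hash starts with `difficulty` hex zeros.
    while not hash_block(block).startswith("0" * difficulty):
        block["nonce"] += 1
    block["hash"] = hash_block(block)  # hash of the block contents (excluding this field)
    return block

# A minimal chain: a genesis block followed by one block recording Rob's payment.
genesis = make_block([], previous_hash="0" * 64)
payment = {"from": "Rob", "to": "Melanie", "amount_gbp": 20}
block1 = make_block([payment], previous_hash=genesis["hash"])
chain = [genesis, block1]

# Verification, as a node might do it: each block's stored hash must match its
# contents, and its previous_hash must point at the preceding block.
for prev, block in zip(chain, chain[1:]):
    contents = {k: v for k, v in block.items() if k != "hash"}
    assert block["previous_hash"] == prev["hash"]
    assert hash_block(contents) == block["hash"]
print("chain verified")
```

Tampering with the recorded payment would change the block's hash and break both checks on every node, which is the property behind the "no single point of failure, no single approval channel" argument above.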
It also means that transactions are verified in the most independent way possible, more independent than a government-regulated organisation like the FSA. This decentralised, democratic and highly secure nature of blockchains means that they can function without the need for regulation (they are self-regulating), government, or other opaque intermediaries. They work because people don't trust each other, rather than in spite of it. Let the significance of that sink in for a while and the excitement around blockchain starts to make sense.

Smart contracts

Where things get really interesting is the application of blockchain beyond cryptocurrencies like Bitcoin. Given that one of the underlying principles of the blockchain system is the secure, independent verification of a transaction, it's easy to imagine other ways in which this kind of process can be valuable. Unsurprisingly, many such applications are already in use or in development. Some of the best ones are:

Smart contracts (Ethereum): probably the most exciting blockchain development after Bitcoin, smart contracts are blocks that contain code that must be executed in order for the contract to be fulfilled. The code can be anything, as long as a computer can execute it, but in simple terms it means that you can use blockchain technology (with its independent verification, trustless architecture and security) to create a kind of escrow system for any kind of transaction. As an example, if you're a web designer you could create a contract that verifies whether a new client's website has launched, and then automatically releases the funds to you once it has. No more chasing or invoicing. Smart contracts are also being used to prove ownership of an asset such as property or art. The potential for reducing fraud with this approach is enormous.

Cloud storage (Storj): cloud computing has revolutionised the web and brought about the arrival of Big Data, which has, in turn, kick-started the new AI revolution. But most cloud-based systems run on servers stored in single-location server farms owned by a single entity (Amazon, Rackspace, Google, and so on). This presents all the same problems as the banking system, in that your data is controlled by a single, opaque organisation which represents a single point of failure. Distributing data on a blockchain removes the trust issue entirely and also promises to increase reliability, as it is much harder to take a blockchain network down.

Digital identity (ShoCard): two of the biggest issues of our time are identity theft and data protection. With vast centralised services such as Facebook holding so much data about us, and efforts by various developed-world governments to store digital information about their citizens in a central database, the potential for abuse of our personal data is terrifying. Blockchain technology offers a potential solution to this by wrapping your key data up into an encrypted block that can be verified by the blockchain network whenever you need to prove your identity. The applications of this range from the obvious replacement of passports and ID cards to other areas such as replacing passwords. It could be huge.

Article Source: http://EzineArticles.com/9690855
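Picking up the web-designer example above: below is a toy, off-chain Python sketch of the escrow idea, in which funds are released only when an agreed condition verifies. It is purely illustrative; real smart contracts run on a blockchain virtual machine (on Ethereum, typically written in Solidity), and the `site_is_live` check here is a hypothetical stand-in for an independently verifiable signal.

```python
class EscrowContract:
    """Toy escrow: hold a payment and release it only when a condition verifies."""

    def __init__(self, payer: str, payee: str, amount: float, condition):
        self.payer = payer
        self.payee = payee
        self.amount = amount
        self.condition = condition  # callable standing in for on-chain verification
        self.released = False

    def settle(self) -> str:
        if self.released:
            return "already settled"
        if self.condition():
            self.released = True
            return f"released £{self.amount:.2f} from {self.payer} to {self.payee}"
        return "condition not met; funds stay in escrow"


def site_is_live() -> bool:
    # Hypothetical check; a real contract would depend on a verifiable on-chain signal.
    return True


contract = EscrowContract("client", "web_designer", 500.0, condition=site_is_live)
print(contract.settle())  # -> released £500.00 from client to web_designer
```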
0 notes
Link
The right AI solution is the one that fits the skill set of the users and solves the highest-priority problems for the business.
The promises of AI are great, but taking the steps to build and implement AI within an organization is challenging. As companies learn to build intelligent products in real production environments, engineering teams face the complexity of the machine learning development process—from data sourcing and cleaning to feature engineering, modeling, training, deployment, and production infrastructure. Core to addressing these challenges is building an effective AI platform strategy—just as Facebook did with FBLearner Flow and Uber did with Michelangelo. Often, this task is easier said than done. Navigating the process of building a platform bears complexities of its own, particularly since the definition of “platform” is broad and inconclusive. In this post, I'll walk through the key considerations of building an AI platform that is right for your business, and how to avoid common pitfalls.
Who will use the platform?
Machine learning platforms are often casually advertised as designed for both software engineers and data scientists. Most of them, however, fail to address both roles well and at the same time. Even worse, they don’t offer enough value to either side to be useful for real work. My experience in building PredictionIO and contributing to Salesforce Einstein AI has helped me understand two distinct groups of practitioners who have divergent sets of requirements in mind.
First, there is the data scientist group. These users usually have a math and statistics background, and are heavy users of tools like R and Python's scientific packages and data visualization tools. This group is responsible for analyzing data and tuning models for the best accuracy, so they’re concerned about whether the platform supports a specific class of algorithms, whether it works well with the data analysis tools they are already familiar with, and whether it integrates with the visualization tools they use. They also want to know what feature engineering techniques it supports, whether they can bring in their own pre-trained models, and so on.
Then there is the software developer group. These users are generally familiar with building web and mobile applications, and are more concerned with whether the platform integrates with the desired data sources, and if the provided programming interfaces and built-in algorithms are sufficient to build certain applications. They want to know how to retrieve model results, whether model versioning is supported, if there is a software design pattern to be followed, and so on.
To implement an AI platform successfully for your organization, you must truly understand your users and do the right heavy lifting accordingly. For example, there are many data scientists who prefer to fine-tune every algorithm parameter manually, but if your users expect out-of-the-box regression that just works, then automated model tuning may become an essential technology of the platform. You want to help these users avoid the hassle of tuning the regularization parameters so they can focus on their top priorities.
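As a concrete illustration of what "automated model tuning" can mean here, the sketch below uses scikit-learn's grid search to choose a ridge-regression regularization strength automatically. The data and parameter grid are placeholders, not anything specific to a particular platform:

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import GridSearchCV

# Toy regression data standing in for whatever the platform ingests.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = X @ np.array([1.5, -2.0, 0.0, 0.5, 3.0]) + rng.normal(scale=0.1, size=200)

# Instead of asking users to hand-pick the regularization parameter,
# the platform can sweep a grid and cross-validate each candidate.
search = GridSearchCV(
    estimator=Ridge(),
    param_grid={"alpha": [0.001, 0.01, 0.1, 1.0, 10.0]},
    cv=5,
    scoring="neg_mean_squared_error",
)
search.fit(X, y)
print("best alpha:", search.best_params_["alpha"])
```

A platform aimed at the "out-of-the-box regression" crowd would run a sweep like this (or something smarter) behind the scenes and surface only the fitted model.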
Are you solving for simplicity or flexibility?
You may wonder why it is so difficult to build a single AI platform that serves two or more personas well. Can’t we simply offer more functionality on the platform? The problem boils down to the tough choice between simplicity and flexibility. It is more of an art than a science to determine which parts should be abstracted away for simplicity and which parts should be made customizable for flexibility.
For some users, an ideal platform is one that abstracts away all data science details. Many software engineers happily utilize the power of Salesforce Einstein's deep learning APIs to recognize objects in images and to classify sentiment in text without worrying about how the AI model is built, or even which algorithm is being used behind the scenes.
For other users, an ideal platform is one that allows a maximum level of flexibility. Many software engineers like to build completely custom AI engines on Apache PredictionIO. They get their hands dirty modifying the Spark ML pipelines and enjoy the freedom to tailor-make and fine-tune every component— from data preparation and model selection to real-time serving logics— in order to create a unique AI use case.
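For the flexibility end of the spectrum, the kind of hand-assembled pipeline such engineers work with looks roughly like the sketch below. This is a plain Spark ML pipeline, not PredictionIO's actual engine-template API, and the two-row inline dataset is a placeholder:

```python
from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.feature import Tokenizer, HashingTF
from pyspark.ml.classification import LogisticRegression

spark = SparkSession.builder.appName("custom-engine-sketch").getOrCreate()

# Placeholder training data: text documents with a binary sentiment label.
train = spark.createDataFrame(
    [("great product, works well", 1.0), ("terrible support, never again", 0.0)],
    ["text", "label"],
)

# Each stage is swappable, which is the "get your hands dirty" flexibility:
# tokenization, feature hashing, and the classifier can all be replaced or re-tuned.
tokenizer = Tokenizer(inputCol="text", outputCol="words")
features = HashingTF(inputCol="words", outputCol="features", numFeatures=1 << 12)
classifier = LogisticRegression(maxIter=10, regParam=0.01)

pipeline = Pipeline(stages=[tokenizer, features, classifier])
model = pipeline.fit(train)
model.transform(train).select("text", "prediction").show(truncate=False)

spark.stop()
```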
How do you balance product and engineering decisions?
As an AI platform is adopted by more and more users, many tough but interesting product and engineering decisions are revealed. What should the platform measure? Should it offer built-in metrics? How should it handle the very different requirements for AI R&D, development and production purposes? What’s the cost versus scalability strategy? Should the platform be cloud agnostic? Should visualization tools be part of the platform? To answer these questions most effectively, you must focus on one complete use case for one type of user at a time, starting with the highest priorities for the business.
What is your multi-layer approach?
Sometimes, the reality is that you do need to construct AI offerings for various types of users. In that case, the separation of offerings must be explicit.
For instance, the Salesforce Einstein artificial intelligence layer has three main components. First, there are several independent services relating to machine learning development. One service is for executing resource-intensive jobs, and is responsible for intelligently allocating and managing distributed computing units to each job. Another service is for scheduling jobs, managing their dependencies, and monitoring the status. These low-level services give data scientists and software engineers the maximum flexibility to build AI solutions in whatever ways they like.
Second, there is an application framework that standardizes the design pattern for some types of common AI applications— specifically in Salesforce’s case, multitenant AI applications. Users will still need to write code, but they write far less of it because many common functionalities are abstracted away. In exchange for some flexibility, the platform offers resilience and scalability to the AI applications built on top of it.
Third, APIs and user interfaces are provided so that the platform can be useful to users who write very little code, or even no code, to build AI applications.
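To make the first layer above a little more concrete, here is a minimal sketch of the dependency-handling part of a job-scheduling service, using Python's standard-library topological sorter. The job names and the single-process "runner" are hypothetical; a real service would dispatch each job to distributed compute and track its status asynchronously:

```python
from graphlib import TopologicalSorter  # standard library, Python 3.9+

# Hypothetical job graph: each job maps to the set of jobs it depends on.
jobs = {
    "ingest_data": set(),
    "build_features": {"ingest_data"},
    "train_model": {"build_features"},
    "evaluate_model": {"train_model"},
    "deploy_model": {"evaluate_model"},
}

status: dict[str, str] = {}

def run(job: str) -> None:
    # Placeholder for handing the job to a resource-allocation service.
    print(f"running {job}")
    status[job] = "succeeded"

# Execute jobs in an order that respects their declared dependencies.
for job in TopologicalSorter(jobs).static_order():
    run(job)

print(status)
```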
Conclusion
Companies that do not thoroughly think through their AI strategies often swing from one direction to another. They are chasing the wind. The growing demand for AI platforms to serve various types of development is inevitable in the foreseeable future, and the right solution is the one that fits the skill set of the users and solves the highest-priority problems for the business.
Continue reading Key considerations for building an AI platform.
from All - O'Reilly Media http://ift.tt/2z66lEj
0 notes