Photo
Got Arch running on WSL!
Link
Def trying this
exa is a modern replacement for ls, written in Rust. Try it on Linux, Unix, or macOS.
Link
See Food! #SiliconValley #HBO
Researchers at the Massachusetts Institute of Technology’s Computer Science and Artificial Intelligence Laboratory are creating a computer system modeled after the human brain to examine photos of food on social media and break them down into recipes.
Video
I wrote a program to change the colors on my keyboard. #imawesome #razer #razerchroma #rgbkeyboard #mechanicalkeyboard #developer #crystallanguage
Photo
Can’t get enough of circular / geometric loading GIFs.
Photo

In "What a Deep Neural Network thinks about your #selfie", Andrej Karpathy runs two million photographs through a neural network, hoping to discover what makes a good #selfie. (Photo: Yiannis Yiasaris)
Photo
Google has released an English parser called Parsey McParseface. Despite the name, the parser is entirely serious - here’s part of their description of it:
One of the main problems that makes parsing so challenging is that human languages show remarkable levels of ambiguity. It is not uncommon for moderate length sentences - say 20 or 30 words in length - to have hundreds, thousands, or even tens of thousands of possible syntactic structures. A natural language parser must somehow search through all of these alternatives, and find the most plausible structure given the context. As a very simple example, the sentence Alice drove down the street in her car has at least two possible dependency parses:
The first corresponds to the (correct) interpretation where Alice is driving in her car; the second corresponds to the (absurd, but possible) interpretation where the street is located in her car. The ambiguity arises because the preposition in can either modify drove or street; this example is an instance of what is called prepositional phrase attachment ambiguity. Humans do a remarkable job of dealing with ambiguity, almost to the point where the problem is unnoticeable; the challenge is for computers to do the same. Multiple ambiguities such as these in longer sentences conspire to give a combinatorial explosion in the number of possible structures for a sentence. Usually the vast majority of these structures are wildly implausible, but are nevertheless possible and must be somehow discarded by a parser.
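To make the two readings concrete, here's a small TypeScript sketch that writes each one out as head-to-dependent edges; the relation labels are ordinary Stanford-style names chosen for illustration, not the parser's actual output format.

```typescript
// Hypothetical sketch: the two readings of the example sentence as dependency
// edges. Only the attachment of "in" differs between them.

interface Edge {
  head: string;
  dependent: string;
  relation: string;
}

// Edges shared by both readings of "Alice drove down the street in her car".
const shared: Edge[] = [
  { head: "drove", dependent: "Alice", relation: "nsubj" },
  { head: "drove", dependent: "down", relation: "prep" },
  { head: "down", dependent: "street", relation: "pobj" },
  { head: "street", dependent: "the", relation: "det" },
  { head: "in", dependent: "car", relation: "pobj" },
  { head: "car", dependent: "her", relation: "poss" },
];

// Reading 1 (correct): "in her car" modifies "drove", so Alice is in her car.
const aliceInCar: Edge[] = [...shared, { head: "drove", dependent: "in", relation: "prep" }];

// Reading 2 (absurd but grammatical): "in her car" modifies "street",
// so the street is located in her car.
const streetInCar: Edge[] = [...shared, { head: "street", dependent: "in", relation: "prep" }];

const show = (name: string, parse: Edge[]) =>
  console.log(name + ": " + parse.map(e => `${e.relation}(${e.head}, ${e.dependent})`).join(" "));

show("in attaches to drove", aliceInCar);
show("in attaches to street", streetInCar);
```

Either edge set is a well-formed tree over the sentence; the parser's job is to score all the candidates and pick the plausible one.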
Link
by Sveta McShane
A few years ago, researchers from Germany and Japan were able to simulate one percent of human brain activity for a single second. It took the processing power of one of the world’s most powerful supercomputers to make that happen.
Hands down, the human brain is the most powerful, energy-efficient computer ever created.
So what if we could harness the power of the human brain by using actual brain cells to power the next generation of computers?
As crazy as it sounds, that’s exactly what neuroscientist Osh Agabi is building. Koniku, Agabi’s startup, has developed a prototype 64-neuron silicon chip.
Their first application? A drone that can smell explosives.
A drone with a bee’s sense of smell
A bee can navigate incredibly well because of its powerful ability to detect and interpret smells. Agabi was fascinated by this and wondered, "What if we could extract just the part responsible for that in the bee and put it in a drone? Suddenly we have a drone which has a sense of smell which equals that of a bee."
Synapses firing
Such a drone would be able to smell bombs several kilometers away, says Agabi. It could also be used for surveying farmland, refineries, manufacturing plants — anything where health and safety can be measured by an acute sense of smell. There are no silicon devices which are able to give us the level of sensitivity that we find in biology.
How does the Koniku chip actually work? Let’s dig into it.
Keep reading
Photo

My wife updated to iOS 10 yesterday and what's the first thing she does?
Link
Warning: Rants ahead! I'm going to start this blog off by offering up a project of mine that I've been working on for the past couple of months. As anyone who works on the Salesforce platform knows, developing JavaScript-heavy applications that run natively in Visualforce can be a very time-consuming and complicated task, especially if you want to be able to do any local testing.

As an example for anyone who isn't a Salesforce developer, or who hasn't tried to bring Salesforce and JavaScript together: I just finished working on a monolith of a project that included a Salesforce community, two large Visualforce pages, over 6,000 lines of Apex, and a very large Angular 1.5 application. When I arrived on the project there was no version control, so all merging was done manually with WinMerge; there was no task runner handling our deployments, so everyone was forced to work in Welkin, Eclipse, or MavensMate; and to top it all off, we weren't even the creators of the Angular app we were working with. It was handed to us by an outside developer who had two weeks to throw something together.

Better yet, we also had, and still have, no local development workflow set up. When we want to make a change, we have to write the change into the code, deploy the code to Salesforce, and then refresh the app. The whole process takes upwards of three minutes every time. As a JavaScript developer with seven years of experience, this was obviously a frustrating workflow. I'm accustomed to being able to make a change, have my code transpiled, and have the browser auto-refresh a few seconds later. Unfortunately, due to the nature of Salesforce, that is a hard thing to accomplish.

So I dedicated 100+ hours of my personal time to making sure that the next time we have a project like this, we can avoid most of the Salesforce pitfalls and focus on doing what we do best: building beautiful websites. Thus was born my salesforce-angular2-boilerplate.

Simply put, it is a mock Angular 2 contact management application that lets you develop locally against a local development server. When the time is right and you are ready to run the app on Salesforce, all you have to do is run the `gulp deploy` task, and your application will be packaged up and sent to Salesforce as two Static Resources and one Visualforce page. Locally, the app uses the Salesforce SOAP API to make web service callouts to an Apex controller. When the app is running on Salesforce, it switches to the JavaScript Remoting API, which doesn't count against API limits.

Please check it out and let me know what you think!
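For anyone curious how that local-versus-Salesforce switch can look from the Angular side, here's a rough TypeScript sketch. The class name, method names, and local endpoint are made up for illustration (the real boilerplate goes through the SOAP API locally, not a JSON proxy), but the Salesforce branch uses the standard Visualforce.remoting.Manager.invokeAction call.

```typescript
// Hypothetical sketch of the environment switch: use Visualforce JavaScript
// Remoting when the app is served from Salesforce, fall back to a local
// transport when running against the local dev server.

declare const Visualforce: any; // provided by the Visualforce page at runtime

export class ApexService {
  // True when the remoting manager exists, i.e. the bundle is running inside
  // a Visualforce page rather than on the local dev server.
  private get onSalesforce(): boolean {
    return typeof Visualforce !== "undefined" && !!Visualforce.remoting;
  }

  // Invoke an @RemoteAction on an Apex controller, e.g. "ContactCtrl.getContacts".
  callApex<T>(action: string, ...params: any[]): Promise<T> {
    if (this.onSalesforce) {
      return new Promise<T>((resolve, reject) => {
        Visualforce.remoting.Manager.invokeAction(
          action,
          ...params,
          (result: T, event: { status: boolean; message?: string }) => {
            event.status ? resolve(result) : reject(new Error(event.message));
          },
          { escape: false }
        );
      });
    }
    // Local development path: stand-in for the SOAP callout, assuming a
    // hypothetical local proxy endpoint.
    return fetch(`/local-apex/${action}`, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(params),
    }).then(res => res.json() as Promise<T>);
  }
}
```

Checking for the remoting global at runtime is one way to let the same bundle run in both places without a separate build flag; the actual boilerplate may wire this up differently.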