dallarosa
Dalla Rosa
62 posts
Talking about technology, programming, and sometimes random stuff from my life
dallarosa · 6 years ago
Text
Running TensorFlow 2 with GPU support on Ubuntu 18.04
This is for those of you out there who have spent many hours trying to set things up and screwing up every possible setting along the way.
Let’s do this quick and dirty!
But before that, a few assumptions:
Make sure you have an NVIDIA-compatible graphics board. If you don't, just stop reading this and go order one on Amazon or the shop of your preference. Come back once you have it in your PC.
Make sure you have python 3 installed
Installing CUDA
I'm also going to assume you already have the NVIDIA repositories installed on your machine. If you don't, check it out here.
Now add the repositories for CUDA, following the instructions here.
Important:
From the TensorFlow developers:
Official tensorflow-gpu binaries (the one downloaded by pip or conda) are built with cuda 9.0, cudnn 7 since TF 1.5, and cuda 10.0, cudnn 7 since TF 1.13. These are written in the release notes. You have to use the matching version of cuda if using the official binaries.
In my case I'm using the latest TensorFlow 2.0 as of now (2.0.0-beta1) and it is linked to CUDA 10.0. As I can't tell which version of TensorFlow you will be installing, check the release notes for more information and add the proper version of the repository.
Now install the necessary CUDA libraries by running:
sudo apt install cuda-toolkit-10-0
Next you'll have to install cuDNN. To add the repository and install the packages, run:
sudo echo "deb https://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu1804/x86_64 /" | sudo tee /etc/apt/sources.list.d/nvidia-ml.list sudo apt update sudo apt install libcudnn7=7.6.1.34-1+cuda10.0 sudo apt install libcudnn7-dev=7.6.1.34-1+cuda10.0
Install TensorFlow
To install TensorFlow with GPU support just run:
pip3 install tensorflow-gpu==2.0.0-beta1
You're done!
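If you want to confirm that TensorFlow actually sees the GPU, here's a quick sanity check (a minimal sketch; tf.config.experimental.list_physical_devices is the API the 2.0 betas expose, it moved out of experimental in later releases):

import tensorflow as tf

# Print the installed version and any GPUs TensorFlow detected.
print(tf.__version__)
print(tf.config.experimental.list_physical_devices('GPU'))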
Let me know if you have any problems in the comments
dallarosa · 7 years ago
Text
Running fast.ai in Colab + Google Drive
Background
fast.ai is with no doubt one of the best resources out there for anyone interested in getting into Deep Learning with a top-down hands-on approach.
They provide an interesting platform with a video lesson going through the contents and a Jupyter Notebook for you to actually get around coding stuff. To get you all setup for the coding part they provide detailed installation steps for using a few cloud providers (AWS, Crestle, Paperspace) and also have a Kaggle kernel that you can just clone.
However, many of us don't feel like paying for a cloud provider and, for whatever reason, might not want to use the Kaggle platform. Or maybe you're just adventurous and, like me, love Colab and feel compelled to see if you can actually make it work. So let's get into the actual work.
This will be divided in basically 2 steps:
Getting the files into Google Drive (should be the simple part)
 Setting up the Notebook to work with Colab and Google Drive (not so simple)
1. Google Drive
There are a few ways of doing this but mainly what you have to do is:
Clone the git repository https://github.com/fastai/fastai.git
Upload the folder to Google Drive
Upload any other data necessary for the lessons to an accessible place (dogs and cats in the case of the first lesson)
In my case, I wanted to make it easier to sync the repository, so I just mounted Google Drive on my machine (you can get the software here) and cloned the repository in the mount folder. Then I created a folder in there for the cats and dogs data and unzipped it all there. Boom, just let Drive sync everything automatically (some 30,000 files) and you're done. Obviously you can also clone the repository, extract the data, and then upload all of that through the web interface by just dragging and dropping the folders.
2.  Setting up the Colab Notebook
After you upload all the files to Google Drive you can just go to fastai/courses/dl1 and open the notebook called lesson1.ipynb
To make things less confusing, below is an image of how the fully set up notebook looks:
And now to the details.
Reading data from Google Drive (Mounting the drive)
First you'll need to enable Colab to read your drive data. I found many different ways, involving downloading packages and an OCaml driver, but after opening the "Code Snippets" tab in Colab itself out of curiosity, I found the correct answer:
from google.colab import drive
drive.mount('drive')
This will mount your Google Drive to a folder called drive and it will appear on the left-side panel, in the "Files" section.
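If you want to double-check that the mount worked, listing the folder is enough (just a sanity check, not a required step):

import os

# After drive.mount('drive'), your Drive content shows up under ./drive
print(os.listdir('drive'))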
Installing the required libraries - Part 1
There's a bunch of libraries that are required to run the notebook. I won't go too deep into this; you're always welcome to Google the package names to learn what they are about.
!pip3 install http://download.pytorch.org/whl/cu80/torch-0.4.1-cp36-cp36m-linux_x86_64.whl
!pip3 install torchvision
!pip install bcolz
!pip install graphviz
!pip install sklearn_pandas
!pip install isoweek
!pip install pandas_summary
!pip install ipywidgets
Installing the required libraries - Part 2
These could actually all be put together, but I separated them for the sake of better understanding (and remembering) that their ordering actually means something.
!pip install fastai==0.7
!pip install torchtext==0.2.3
!pip install Pillow==4.0.0
When you pip install a specific version of a library, pip will upgrade other libraries according to its dependencies. The problem is that sometimes you run into conflicts.
fastai has to be version 0.7 as this is the latest stable release. The present course is based on this version. Not setting a version number will cause pip to install version 1.x
fastai requires version 0.2.3 of torchtext or it will throw errors when trying to import fastai libraries
When you install the libraries above (I believe it's pytorch, but I'm not sure), they end up upgrading Pillow from the preinstalled version 4 to version 5. However, that causes an error inside Colab, and the easiest solution is to just downgrade it back to version 4 after everything else is done.
Ok! You should be good to go now!
Bonus: Setting the path for the Dogs and Cats task
For those of you who got stuck on how to specify the path to your image folder, here is a tip :)
When you mount it using external libraries your Drive content will be directly under the drive folder; however, using the "official" way, I found out that it creates another folder, "My Drive". I haven't tested but this might change depending on your locale. The best way to check this is to click on the drive folder on the left pane and check for yourself. My path looked like the one below, as I extracted the picture data inside a folder called data inside the dl1 folder.
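For reference, this is roughly what that looks like in code (the folder names below are just how my setup ended up; adjust them to wherever you extracted your data):

# Path relative to the Colab runtime; "My Drive" is what the official mount creates.
# "data/dogscats" is just where I unzipped the pictures; change it to match your layout.
PATH = "drive/My Drive/fastai/courses/dl1/data/dogscats/"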
Ok, now you're all set for real. Now you can enjoy fast.ai lessons inside Colab from anywhere in the world without having to pay for a cloud platform.
This is no expert work. This was based on a lot of trial and error. Feel free to comment with better ideas or correct me if I made any mistakes. I might come back to this article and edit a few things as I have more time to figure things out and understand them better.
I'd like to give credit to this article, which was the base for most of my work. It was really just a lot of tweaking of the contents there (in Japanese).
dallarosa · 7 years ago
Text
Setting up Jupyterhub in Debian
Note
When I first wrote the draft for this article a few months ago I still didn't know about the public version of Colab. For most of you out there I strongly recommend using Colab, as it's free, supports realtime collaboration, uses Google Drive to save and organize your notebooks, and supports GPU acceleration (for free!)
Back to the original post
So I’ve started taking the Deep Learning specialization at Coursera and I’ve come to the conclusion that it would be a lot better to have my own Jupyter notebooks to test stuff as I go through the videos as I’m not that well versed on numpy. 
One of my needs was that I work across multiple computers and I wanted to have access to the same notebooks wherever I am. I found out that the solution for me was something called jupyterhub which pretty much provides an external interface with a proper login system to an individual user’s notebooks.
I’ve faced a bunch of small problems during the process and I hope to save the trouble for all the people out there looking to create a similar setup, be it for personal use or for their small organization.
My setup
I’m using a single server machine (a VPS) running Debian Unstable. This is my server for personal projects and for playing around both with sysadmin stuff and small programming experiments.
Getting ready to install Jupyter.
1. Install Python3 and pip3
You will need python 3.x to work with jupyterhub. It does not come pre-installed on Debian, so as root use the command below to install python3 and pip3
https://gist.github.com/dallarosa/7e3c6b48e6c54d32f19f24a5b95ac389
2. Install Jupyter and Jupyterhub
Now let's use pip3 to install Jupyter Notebooks and Jupyterhub.
https://gist.github.com/dallarosa/44aea3b558d7cdf422ff93368755c4fd
pip3 install --upgrade notebook jupyterhub
3. Running Jupyterhub
To run Jupyterhub just use:
https://gist.github.com/dallarosa/955b3eb20308938792dc9823b6179919
If you don't feel like running Jupyterhub with root privileges, you can check out the Jupyterhub wiki on how to set up your system.
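As a starting point for your own setup, here's a minimal jupyterhub_config.py sketch (these values are only examples of the kind of thing you might set, not something the wiki prescribes; you can generate the full default file with jupyterhub --generate-config):

# jupyterhub_config.py -- example values only; adjust for your own server.
# The "c" config object is provided by JupyterHub when it loads this file.
c.JupyterHub.ip = '0.0.0.0'                  # listen on all interfaces
c.JupyterHub.port = 8000                     # JupyterHub's default port
c.Authenticator.admin_users = {'dallarosa'}  # hypothetical admin username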
dallarosa · 7 years ago
Text
Golang: Converting Hexadecimal code to String
Just a simple piece of code showing how easy it is to do something like converting a string of hexadecimal to text.
https://gist.github.com/dallarosa/6806fad172c836782814723087802c94
dallarosa · 7 years ago
Text
Audio Spectrograms with Python, for Machine Learning
Lately I’ve been getting back to my roots and decided to learn and try my hand in the latest trend in Machine Learning: Neural Networks. As a start I’ve been going through Andrew Ng’s Deep Learning Specialization on Coursera.
While still going through the courses I’ve decided to play around with my own model: A model able to classify across a few different music genres. 
Well, as anyone who has already worked with any kind of Machine Learning knows, every project starts not with coming up with crazy algorithms but with preprocessing the data and getting it ready for learning.
Reading around, I’ve found out that one of the best ways to work with audio data is to first transform it into spectrograms and then learn on the spectrogram images.
Without further ado, here’s the script, based on numpy, scipy and matplotlib. This has only been tested with Python 3.
https://gist.github.com/dallarosa/a2e129a59fa9845940e8e98d958ca603
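If you'd rather see the idea in a few lines than open the gist, here's a minimal sketch along the same lines using scipy and matplotlib (the file names are placeholders and this isn't the exact script from the gist):

import numpy as np
import matplotlib.pyplot as plt
from scipy import signal
from scipy.io import wavfile

# Load a WAV file (placeholder path) and mix stereo down to mono if needed.
rate, samples = wavfile.read("track.wav")
if samples.ndim > 1:
    samples = samples.mean(axis=1)

# Compute the spectrogram: frequency bins, time bins, and the power matrix.
freqs, times, spec = signal.spectrogram(samples, fs=rate)

# Plot the power in dB and save the image for the learning step.
plt.pcolormesh(times, freqs, 10 * np.log10(spec + 1e-10))
plt.ylabel("Frequency [Hz]")
plt.xlabel("Time [s]")
plt.savefig("track_spectrogram.png")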
I’ve seen, both reading around and in some of Andrew’s lessons, that RNNs perform well for speech and text processing. However, I haven’t gotten that far yet (actually, I’m still about to start the CNN course, I know, such a noob), so for now I’m just working with the tools I have at hand.
Feel free to utilize the code and to give me any tips you may have.
dallarosa · 8 years ago
Text
Golang: Named result parameters
TL;DR
If your functions use named result parameters, you can return the values by just executing a return statement without any parameters.
Long version
So, I have a confession to make. And I think it’s gonna be a pretty damning one.
I’ve been using Go for a while now and it has kind of become my main language, from scripting to web apps to sysops kinds of tools. Of course I knew of the existence of named result parameters and had always been a fan of them. I believe that having predefined result variable names (and values) is great both for the one implementing the functions and as documentation for those reading your documentation/code. Now here’s the confession: I didn’t really know I could return the current value of those named result parameters by just executing a return statement without any parameters. I know, heresy right? The fact that I was both naming the return variables and passing them as parameters to the return statement always felt weird, until the other day I was going through Effective Go again and found the Named result parameters section.
The moral of the story? R.T.F.M.
dallarosa · 8 years ago
Video
youtube
Normally this blog is mostly focused on tech stuff; however, as this is _my_ personal blog, I’ll take the liberty of going ahead with a different topic.
So, maybe you didn’t know, but apart from having been an engineer for most of my working life, I’m also an amateur salsa dancer. I started dancing around 4 years ago (without any prior dance experience) and, after many team choreographies, I’ve had my first shot at a pair performance. The interesting thing was, this was not only my first try as a pair, it was also my first attempt at coming up with the choreography myself (with my partner, of course).
Dancing someone else’s choreography is “easy”: I never thought so, but having to first go through the creative process of coming up with your own, and then actually executing what you came up with, is a totally different level of difficulty. It’s like I’ve been playing this game in normal mode and all of a sudden I decided to skip “Hard” and went straight to “Nightmare” or something.
It can get really frustrating. There’s this whole list of things that go wrong:
Not understanding yourself
Not understanding your partner
Pieces and bits that look really good by themselves but suck as a set
I won’t go too far though, especially because most of it can be explained by the first two.
An interesting thing I found out, however, was that coming up with your own choreography isn’t too different from coming up with a new service or doing scientific research: first you come up with a topic or concept of what you want to do, then you research what already exists out there, get inspired, reuse some existing ideas, and then add your unique touch to it.
I also got a glimpse of what it’s like to partner with someone to build something. Be it dance or a startup or just a weekend hack, doing something with someone else can be very hard simply because you two are different people with different views and mindsets. You are looking at the same thing, you have the same goal, but your ways of realizing that goal can be very different. So learning how to deal with those differences can be very challenging; however, I do believe those are great opportunities for growth.
I believe this experience has changed me as a person and I’m looking forward to what comes next :)
dallarosa · 9 years ago
Text
Apple supporting HTTP/2
While reading more about Polymer’s new CLI tool, especially about building and deploying, I’ve come across the following:
build/unbundled. Contains granular resources suitable for serving via HTTP/2 with server push.
build/bundled. Contains bundled (concatenated) resources suitable for serving from servers or to clients that do not support HTTP/2 server push.
As iPhone/Safari has a fair market share, especially in mobile, I was curious to know if Apple had decided to support HTTP/2, and it was a happy surprise to learn that yes! Apple has been supporting HTTP/2 since iOS 9 and OS X 10.11.
With that knowledge in hand I think I can feel safe enough to use the unbundled version instead of the bundled version.
dallarosa · 9 years ago
Photo
“Emacs vs. Vim”
Goya, 1820–1823. Oil mural transferred to canvas.
(collaboration from Vicent)
dallarosa · 11 years ago
Quote
Encoding the type of a function into the name (so-called Hungarian notation) is brain damaged - the compiler knows the types anyway and can check those, and it only confuses the programmer. No wonder MicroSoft makes buggy programs.
https://www.kernel.org/doc/Documentation/CodingStyle
dallarosa · 12 years ago
Text
A member of our community is missing, help find him
Luke Arduini has been missing since Jan 1. Co-maintainer of npm and a long-term, valued member of the Node.js community, his absence is deeply felt by many of us.
If you have any information, please contact Oakland PD: 510 238 3641 or P.I. Jim Vierra 415 999 5911.
Luke, if you’re out there, please come back, we miss you!
dallarosa · 12 years ago
Text
Short Post: Writing cross-platform code in Go
Lately I've been working on this little project in Go that involves system calls and some other more low level features of the OS. The project is supposed to support both Linux and Mac OSX / BSD so I was wondering how to separate code for those platforms. I found the answer in the Go source: Just name the files according to the target platform!
Example: 
Let's say we have a project called "Platform Test" and we create the folder platform_test for it. In that folder you'll have the following files
test_darwin.go
package main

import "fmt"

func main() {
    fmt.Println("I'm on mac")
}
and
test_linux.go
package main

import "fmt"

func main() {
    fmt.Println("I'm on linux")
}
Now, if you run 
go build
in platform_test from your mac, you'll have an executable called "platform_test" that prints
I'm on mac
and 
I'm on linux
if compiled on a linux machine.
I gotta say, this was a pretty neat way to implement cross-platform support.
dallarosa · 12 years ago
Link
Missing some real-world community around the Go language in Tokyo, I've decided to create the Tokyo Golang Developers meetup. It is a meetup for anyone with an interest in the Go programming language, be you a super hacker or a complete beginner. We're having (hopefully) our first meetup on the 28th in Shibuya and welcome all of you Tokyo gophers!
dallarosa · 12 years ago
Note
Hello. I saw your post about slackware on zenbook 3 months ago. I am currently thinking about buying it for usage with some GNU Linux distro so I want to ask a few questions. Do you have any unresolved problems on it? Does it need any non-free software/firmware/drivers to work properly? Is battery life good? Also I saw a post on 4chan/g/ a few days ago where someone said that he was using slackware on this zenbook. Could that be you?
Hey!
> Do you have any unresolved problems on it?
Actually I do. I still have to write about it but it was kind of painful to get UEFI to work properly and even now I still have a kind of weird boot flow.
Another thing is the Ambient Light Sensor. The asus driver doesn’t know how to handle it and keeps sending nulls to the standard output. It doesn’t really affect most applications but you might notice some weird stuff (Google Drive shows the help dialog for no reason, vim leaving insert mode for no reason). Anyways, you can solve that by putting a piece of tape on the sensor :p My goal for the end of the year holidays is to do something about that driver. 
> Does it need any non-free software/firmware/drivers to work properly?
For the touchscreen, I had to find the maker’s page and download their software: http://home.eeti.com.tw/DriverDownload.html
> Is battery life good?
Battery life is great. I make between 7.5 to 8 hours depending on what I’m doing.
> Could that be you?
I don’t really use 4chan so it wasn’t me. I actually had a guy on my blog asking for a write up on the UEFI and all other problems so it could be him :)
dallarosa · 12 years ago
Text
Bluetooth headset on Linux
One of the Holy Grails for electronic device users is eliminating cables and cords. We first started with remote controls, then infrared-based mice, and wireless and Bluetooth devices.
Even though the days when it was difficult to connect to wireless networks or transfer files to/from your PC using Bluetooth are long past, a topic that remains full of mystery and problems is Bluetooth audio.
I bought my first bluetooth headset a few years ago and I tried to connect it to my Slackware notebook and I obviously failed miserably.
After many years I decided to buy a HiFi wireless headset, something that would allow me to both listen to music and take calls with fair quality. After looking around for a while, I decided to go with a friend's recommendation and bought a Sony Ericsson MW600/B wireless headset.
Taking a few lines to talk about the device itself: I just love it. The sound quality for both playing music and calling is great, plus the earphones are not built into the device, so if you have a more expensive, higher quality pair of earphones you can use those. The battery life is also great. Sony claims it can do around 8.5 hours of music playback and I can back that: I can leave for work at 10am with a full charge, spend a big part of my day listening to music, and they'll only die on me the next day if I don't charge them during the night. Charging only takes around 2.5 hours. Also, it can pair with up to 3 devices (and connect to up to 2 devices at the same time - one in headset mode and the other in playback mode).
OK! Let's get the digression over with and back to the main point of this post, which is how to get this thing working on your Linux machine.
If you have a fairly new distribution running, you won't have problems pairing your device. Just put it in pairing mode and let your Bluetooth manager do the rest.
On KDE, just Alt+F2 to open the “Run Command” dialog and type Bluetooth Devices (probably bluetooth will be enough)
Use the "search" and "setup" buttons to properly set up your device. I guess you want to use it to listen to audio, so make sure to set it up as an "A2DP Sink (Send audio)" (in some places it might be written as "Audio Sink").
Ok! After doing that your device is properly connected to your machine and ready to play some music! Then you try it… just to see it fail. What do you have to do?
Actually, we've already had to deal with something similar when we fixed the problems with HDMI audio.
We need to let ALSA know that the bluetooth device is there and can be used (or, if you want, make it the default device). Let's see how we do that.
As root, open the file /etc/asound.conf and add the following lines
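Something along these lines (a rough sketch of the ALSA bluetooth plugin config; replace the placeholder MAC address with your headset's address):

# /etc/asound.conf -- rough sketch; the MAC address below is a placeholder
ctl.btheadset {
    type bluetooth
}
pcm.btheadset {
    type plug
    slave {
        pcm {
            type bluetooth
            device "XX:XX:XX:XX:XX:XX"
            profile "auto"
        }
    }
}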
What you’re doing is, first adding a control device of the type bluetooth and a pcm device. That should already be enough for you to be able to choose the headset (it will be called “btheadset”) to be the playback device.
Lastly, like with HDMI, applications like Chrome and the Flash Player will use the default pcm device, so you'll have to change that too:
pcm.!default {
    type plug
    slave.pcm "btheadset"
}
The lines above go into the .asoundrc, in your home directory. (Check the HDMI article for more details)
dallarosa · 12 years ago
Link
It's been a while since my last post but today I'm here to do some advertising!
Since July I've taken over the Tokyo Android Developers meetup and I'd like to invite all of you who are in the Tokyo area and are interested in Android development to show up if you can!
Our next meetup will be on the 29th of October with a talk by me about the basics of Android development, and a talk about how to develop Android apps using Scala, by Devon Stewart, the guy behind the Tokyo Scala Meetup 
Hope to see you there!
dallarosa · 12 years ago
Photo
Slackware on ASUS Zenbook Prime UX31A.