kevinkatzke
Kevin Katzke
9 posts
Articles, Tutorials and Insights about Automotive SW-Development, Robot-Navigation, Deep Learning and many more using C++, Python and JavaScript.
kevinkatzke · 7 years ago
TensorFlow Basics - Second Part
Placeholder Variables
We already talked about variables and constants in the first part of TensorFlow Basics. In this part we extend the topic by introducing TensorFlow placeholders. Placeholders give us the chance to define the computation graph without specifying concrete input values; it is sufficient to declare the tensor type of the data that we want to work with. As expected, a placeholder can then be used in any further computation, such as the multiply operation shown in the following script:
import tensorflow as tf

input1 = tf.placeholder(tf.float32)
input2 = tf.placeholder(tf.float32)
output = tf.multiply(input1, input2)

with tf.Session() as sess:
    out = sess.run([output], feed_dict={input1: [2.0], input2: [3.0]})
    print(out)
    #=> [array([ 6.], dtype=float32)]
As you can see, using placeholder variables is intuitive. The one thing that is important to mention is that every placeholder needs to be replaced by a real value when it is processed in a Session.
To do that we simply use the feed concept that we discussed in the first part of the tutorial. In sess.run() we pass two different arguments: first the variable whose output we want to fetch from the Session run (in this case the result of the placeholder computation, output), and second the feed_dict that maps each placeholder to the concrete value it should take.
Variable Scope
As always in programming, small side projects can grow to hundreds and thousands of lines of code very quickly. And as in many other programming languages, TensorFlow provides a feature, the Variable Scope mechanism, to group variables in namespaces and avoid name clashes.
The TensorFlow Variable Scope feature consists of two main functions:
tf.get_variable(<name>, <shape>, <initializer>)
tf.variable_scope(<scope_name>)
In a very complex deep learning model you may have different variable scopes: one variable scope for the convolutional part, one for the fully connected part, and maybe even a third or fourth for a recurrent part of your network architecture. The easy-to-use TensorFlow variable scope command helps you organize your model structure into logical parts, using the same names for similar concepts (e.g. W for a weight variable and b for a bias) without mixing different parts up.
- tf.get_variable(<name>, <shape>, <initializer>)
tf.get_variable() will create a variable with the specified name if such a variable does not already exist, and it will access that variable if it finds that it does. Accessing an existing variable additionally requires the enclosing variable scope to be opened with reuse=True.
- tf.variable_scope(<scope_name>)
tf.variable_scope() works closely together with tf.get_variable(). It lets you define the namespaces mentioned above. The next example shows that in TensorFlow you cannot define two variables of the same name in one namespace:
import tensorflow as tf

v = tf.get_variable("var", [1])
v = tf.get_variable("var", [1])  # ERROR!
#=> ValueError: Variable var already exists, disallowed.
Recall that Python namespaces are not TensorFlow namespaces! Changing the Python variable name does not solve the problem; in the next example there is still a name clash in the TensorFlow computation graph:
import tensorflow as tf

v = tf.get_variable("var", [1])
w = tf.get_variable("var", [1])  # Still an ERROR!
#=> ValueError: Variable var already exists, disallowed.
Defining a namespace that encapsulates one of the variables is the key to solving this issue:
import tensorflow as tf

v = tf.get_variable("var", [1])
print(v.name)
#=> var:0

with tf.variable_scope("blah"):
    v = tf.get_variable("var", [1])
print(v.name)
#=> blah/var:0
You can see that in this example there is no error. The reason is that the second "var" variable now lives in the namespace "blah", which makes it addressable under the complete name namespace/name, in this case blah/var.
Asserts
Plain Python assert statements can be used in TensorFlow code to perform quick checks on variables and outputs, making sure that no incorrect value is processed further. An assert can be added anywhere in a single line of code:
import tensorflow as tf

assert 1 == 1  # WORKS
assert 2 == 1  # ERROR!
#=> AssertionError

assert tf.get_variable_scope().reuse == False
#=> AssertionError (raised when the current scope was opened with reuse=True)
kevinkatzke · 8 years ago
TensorFlow Basics
Computation Graph
If your aspiration is to define a neural net in TensorFlow, your workflow is to first construct the network by defining all computations. Each single computation adds a node to the so-called computation graph. Providing data to a Session (we will come to that later) asks TensorFlow to execute the given graph.
TensorFlow comes with a neat built-in tool called the TensorFlow graph visualization that helps you keep an insight into which computations are actually defined in a computation graph. A computation graph can get hairy very quickly as one adds many nodes to it; the graph visualization tool makes it fairly easy to understand how the data flows through the graph at any given time.
Session Management
After the computation graph has been defined, one has to take care of the TensorFlow Session management. A Session is necessary to execute the predefined computation graph; a node in a computation graph has no state before it is evaluated in a Session.
import tensorflow as tf

a = tf.constant(1.0)
b = tf.constant(2.0)
c = a * b
print(c)
#=> Tensor("mul:0", shape=(), dtype=float32)

with tf.Session() as sess:
    print(sess.run(c))
    print(c.eval())
    #=> 2.0
    #=> 2.0
The line c = a * b just describes how the two TensorFlow constants should be combined without actually doing it. To run the computation, the node has to be evaluated in a TensorFlow Session. The same variable can have two completely different values in two different sessions (e.g. depending on the specific input values).
To make life easy, especially when you are experimenting with TensorFlow in an IPython notebook, TensorFlow comes with the concept of an InteractiveSession, which keeps the same Session open by default. This avoids having to keep a variable holding the session.
import tensorflow as tf

sess = tf.InteractiveSession()
a = tf.Variable(1)
a.initializer.run()  # No need to refer to sess

print(a.eval())  # WORKS
#=> 1
One important thing to keep in mind: "A session may own resources, such as variables, queues, and readers. It is important to release these resources when they are no longer required. To do this, either invoke the close() method on the session, or use the session as a context manager." (TF documentation)
TensorFlow Variables
In TensorFlow there are two slightly different concepts of variables: constants and variables. The big difference between the two options is that a constant does not necessarily have to be initialized, while a variable must be.
Constants
import tensorflow as tf

constant_zero = tf.constant(0)  # constant

with tf.Session() as sess:
    print(sess.run(constant_zero))
    #=> WORKS
Variables
"When you train a model, you use variables to hold and update parameters. Variables are in-memory buffers containing tensors. They must be explicitly initialized and can be saved to disk during and after training. You can later restore saved values to exercise or analyze the model." (TF documentation)
import tensorflow as tf

constant_zero = tf.constant(0)  # constant
variable_zero = tf.Variable(0)  # variable

with tf.Session() as sess:
    print(sess.run(constant_zero))
    #=> WORKS
    print(sess.run(variable_zero))
    #=> ERROR!
    sess.run(tf.global_variables_initializer())
    print(sess.run(variable_zero))
    #=> WORKS
Note that a variable is usually defined by not only giving it a value but also a name:
variable_zero = tf.Variable(0, name="zero")
The name "zero" is the identity the variable has been given in the TensorFlow namespace, while variable_zero is the identity it has in the Python namespace. When referring to this variable in the TensorFlow computation graph one uses "zero"; if one wants to print the variable in the Python script, one refers to it as variable_zero.
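The two names can be inspected side by side; TensorFlow appends ":0" (the output index) to the graph-side name:

```python
import tensorflow as tf

# variable_zero is the Python-side handle, "zero" the graph-side name.
variable_zero = tf.Variable(0, name="zero")
print(variable_zero.name)  #=> zero:0
```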
Feeds and Fetches
When a computation graph is defined, there are two different kinds of operations that can be performed on it: feeds and fetches. A feed places data into the computation graph, while a fetch extracts data from it.
The previously used operations c.eval() and sess.run(c) are both TensorFlow fetch operations.
To input data into the computation graph one uses the very simple command called tf.convert_to_tensor():
import tensorflow as tf
import numpy as np

numpy_var = np.zeros((2, 2))
tensor = tf.convert_to_tensor(numpy_var)

with tf.Session() as sess:
    print(tensor.eval())
    #=> [[ 0.  0.]
    #    [ 0.  0.]]
It is not possible to evaluate a NumPy array in a Tensorflow session (AttributeError: 'numpy.ndarray' object has no attribute 'eval').
First the NumPy array has to be converted into a TensorFlow tensor (which automatically creates a TF node that is inserted into the computation graph: a feed operation). The tensor can then be evaluated in a TensorFlow session, which in this case returns [[ 0. 0.] [ 0. 0.]] as expected.
kevinkatzke · 8 years ago
Keypoints and Feature Descriptors in OpenCV
The field of computer vision is constantly growing. Not only in active research but on a much broader level, in actual inventions and applications, computer vision and image processing have gained a lot of traction in recent years.
For anybody interested in developing applications in this area, the place to go for almost 10 years now has been the OpenCV library. With the aim of providing real-time computer vision and image processing capabilities, OpenCV has become the standard in research, hobby programming and commercial products.
One of the several strengths of OpenCV is the broad built-in functionality for so-called feature detection, feature description and feature matching. The term 'keypoint' is used interchangeably with 'feature'. Feature detection is critical for applications working on image stitching, where one tries to combine multiple pictures from several viewpoints into a single comprehensive image. More recently the same concept is being transferred from 2D to 3D, where one tries to stitch 3D point clouds together instead of plain images.
OpenCV is a huge help in solving such tasks, and most recently the developer community has made things even easier by introducing common interfaces for feature detectors, feature descriptor extractors and feature matchers.
Especially for developers starting to work in computer vision, it can be quite time consuming to read through all the relevant papers introducing the different algorithms and techniques needed to set up a complete pipeline consisting of a feature detector, a feature descriptor and a feature descriptor matcher. Despite the help of the common interfaces in OpenCV, it can be a pain to get the broader picture of what each single algorithm in this field solves regarding the keypoint pipeline and what it does not. Some algorithms such as SIFT can be used for both feature detection and feature description, while others like FAST only detect keypoints and BRIEF only extracts descriptors. I had these exact same issues, so I decided it could be helpful to list all algorithms available in OpenCV in the field of image registration together with their use cases:
1: keypoint detector and descriptor extractor
2: only keypoint detector
3: only keypoint descriptor
4: descriptor extractor
5: descriptor matcher
o: free to use
x: non-free (patented)
e: experimental (v3.1.0)
n: non-experimental (v3.1.0)
Algorithm   Use case   License   Status
AKAZE       1          o         n
MSER        2          o         n
BRISK       1          o         n
ORB         1          o         n
KAZE        1          o         n
FAST        2          o         n
SURF        1          x         n
SIFT        1          x         n
FREAK       3          o         e
BRIEF       4          o         e
DAISY       3          o         e
LATCH       3          o         e
LUCID       3          o         e
STAR        2          o         e
MSD         2          o         e
BF          5          o         n
FLANN       5          o         n
kevinkatzke · 9 years ago
Shell Script Toolbox
When you work in image processing you have the recurring problem of needing to preprocess images to check a new feature or to try out a different dataset or algorithm. But renaming and resizing images by hand is a painful and boring task anywhere beyond 3-4 images.
I started a collection of very useful shell scripts for a quick and easy preprocessing of images for different tasks in vision and deep learning.
For all the following tasks I assume that the images are in the .png format. Furthermore, I always save the processed images in a new folder instead of overwriting the original ones, just in case. Therefore please create a folder and replace folder/ in the following scripts with the folder name of your choice.
I always assume that you want to process all the images in the current folder!
Just navigate with your terminal to the folder in which your images are located. Then type in the following scripts and hit enter. (Some of the scripts may take up to a couple of minutes to execute, depending on the operation and the number of images being processed.)
Transform RGB images to one channel grayscale:
for a in *.png; do convert "$a" -set colorspace gray -separate -average folder/"$a"; done
Resize images by percentage in width and height:
for a in *.png; do convert "$a" -resize 50% folder/"$a"; done
Resize images to specific size:
for a in *.png; do convert "$a" -resize 640x270 folder/"$a"; done
Rename images in consecutive order:
The next script is relatively long, therefore I have it saved in a file called rename.sh:
#!/bin/ksh a=1 for i in *.png; do new=$(printf "$a.png") mv -i -- "$i" "$new" a=$(($a+1)) done
Place the file in the same folder in which the images are and execute it via terminal by running:
sh rename.sh
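For anyone who prefers Python over ksh, here is an equivalent sketch of rename.sh using only the standard library. The helper name and the two-pass approach are my own additions; the second pass avoids clobbering a file that is already named like 1.png:

```python
import os

def rename_consecutive(folder):
    """Rename every .png in `folder` to 1.png, 2.png, ... in sorted order."""
    pngs = sorted(f for f in os.listdir(folder) if f.endswith(".png"))
    # Pass 1: move everything to temporary names so no target is overwritten.
    for i, name in enumerate(pngs, start=1):
        os.rename(os.path.join(folder, name),
                  os.path.join(folder, ".tmp_%d.png" % i))
    # Pass 2: move the temporary names to the final consecutive names.
    for i in range(1, len(pngs) + 1):
        os.rename(os.path.join(folder, ".tmp_%d.png" % i),
                  os.path.join(folder, "%d.png" % i))
    return len(pngs)
```

Call it as rename_consecutive('.') from inside the image folder, or pass any other path.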
Crop a specific region out of an image:
Open the first image in GIMP. Now open the rectangle selection and select the part of the image that you want to crop out. In the main toolbox you can now find the following information:
- Position (x,y)
- Size (width,height)
Now replace these values in the following shell script:
for a in *.png; do convert "$a" -crop WIDTHxHEIGHT+X+Y folder/"$a"; done
Here is an example of the code:
for a in *.png; do convert "$a" -crop 530x460+7+13 cropped/"$a"; done
kevinkatzke · 9 years ago
Mat Datatypes in OpenCV
I spend most of my OpenCV debugging time on faulty datatypes causing all kinds of different errors, from segmentation faults to very strange image outputs. Therefore I searched the internet for a complete list of OpenCV datatypes to use as a quick reference. The documentation I found was poor, so I decided to write a short overview myself.

The datatypes are named by the following structure:

16UC3 = 16 bit unsigned short, 3 channels
32FC1 = 32 bit float, 1 channel

Unsigned 8 bits (uchar, 0~255): CV_8UC1, CV_8UC2, CV_8UC3, CV_8UC4
Signed 8 bits (char, -128~127): CV_8SC1, CV_8SC2, CV_8SC3, CV_8SC4
Unsigned 16 bits (ushort, 0~65535): CV_16UC1, CV_16UC2, CV_16UC3, CV_16UC4
Signed 16 bits (short, -32768~32767): CV_16SC1, CV_16SC2, CV_16SC3, CV_16SC4
Signed 32 bits (int, -2147483648~2147483647): CV_32SC1, CV_32SC2, CV_32SC3, CV_32SC4
Float 32 bits (float, -1.18*10^-38 ~ 3.40*10^38): CV_32FC1, CV_32FC2, CV_32FC3, CV_32FC4
Double 64 bits (double): CV_64FC1, CV_64FC2, CV_64FC3, CV_64FC4

To initialize a Mat object in OpenCV write:
//Mat myMat(rows, cols, Datatype);
Mat myMat(2, 2, CV_32FC3);
To access a pixel in a Mat write:
//myMat.at<Datatype>(row, col);
myMat.at<int>(y, x);
myMat.at<float>(y, x);
myMat.at<double>(y, x);
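For reference, the same mapping from the Python side: the cv2 bindings expose a cv::Mat as a plain NumPy array, so each CV_<bits><type>C<channels> code corresponds to a dtype plus a channel dimension. Only NumPy is needed to sketch this (my_mat is a hypothetical name):

```python
import numpy as np

# Mat myMat(2, 2, CV_32FC3);  in C++ corresponds to:
my_mat = np.zeros((2, 2, 3), dtype=np.float32)  # rows, cols, channels

# myMat.at<float>(y, x) corresponds to plain array indexing:
pixel = my_mat[0, 0]      # one pixel, all 3 channels
value = my_mat[0, 0, 1]   # a single channel value

# The dtype carries the U/S/F part of the code:
#   CV_8U  -> np.uint8     CV_16S -> np.int16
#   CV_32S -> np.int32     CV_64F -> np.float64
print(my_mat.dtype, my_mat.shape)  #=> float32 (2, 2, 3)
```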
More information can be found here:
OpenCV: Mat - The Basic Image Container
OpenCV: Operations with images
kevinkatzke · 9 years ago
Setting up Code::Blocks and PCL on Linux
To kick-start the upcoming blog posts about point cloud processing we begin right away by setting up our programming environment. I will describe the process of setting up the Point Cloud Library (PCL) using the open source IDE Code::Blocks (http://www.codeblocks.org/). Code::Blocks is a widely used free, open source and cross-platform IDE. Unless you have a good reason to use a different IDE, you should set it up as described to make this as simple as possible. Note that I will use Linux as my operating system for the PCL tutorials. Following along on OS X shouldn't be a problem; Windows is only partially supported, but there are always simple workarounds which I will try to point out as they come up.

Installing Code::Blocks on Linux

The simplest way to get Code::Blocks up and running is to use your Linux software center. Fire it up, search for Code::Blocks and hit install. You should have a running version in less than 3 minutes. If you cannot or don't want to install through the software center, you'll find everything you need here: http://wiki.codeblocks.org/index.php/Installing_Code::Blocks

Now that you have Code::Blocks installed, open it and create a new file. Choose it to be a 'Terminal application' and set C++ as the language. Set a name for the file and click Finish. Now it is important to check for a common bug in Code::Blocks on Linux. Insert the following code into the newly created file, save it and click the 'Compile and run' button in the toolbar:
#include <iostream>

int main() {
    std::cout << "Hello world!" << std::endl;
    return 0;
}
If a terminal window pops up displaying "Hello world!" everything is fine and you are all set. But Code::Blocks may instead display the following error: "Process terminated with status 255".
This means that the default program that the IDE wants to use to run console programs is missing on your machine. To solve this you just need to change the Code::Blocks default. Go to Settings -> Environment and, in the window that opens, choose "gnome-terminal --disable-factory -t $TITLE -x" from the dropdown at the bottom labeled "Terminal to launch console programs". After you did this, click again on 'Build and run' and Code::Blocks should fire up a terminal window displaying the correct output. (Note that gnome-terminal may not be the default terminal on your machine; in that case select the correct one in the dropdown to make everything work.)
Installing the Point cloud Library
Now that you have a running IDE we need to download and install the PCL library and link it to Code::Blocks via compiler and linker directories.
The first step is to download the PCL library; to do this follow the instructions on their website: PCL - Linux Downloads. (For Windows and OS X please visit: PCL - Downloads.)
PCL should be downloaded and installed at this point, either with the help of the prebuilt binaries through the apt package manager (Linux) or an alternative. Next we have to add the PCL library to Code::Blocks in the form of search directories and link libraries, so that the compiler and the linker know where to look when we use PCL-related code in our projects.
Therefore go to Settings -> Compiler..., then select the tab Linker settings and add the following libraries:
Note: the following paths may not be the right paths on your machine! But every .so file and every directory should be somewhere on your machine if you have followed the tutorial so far.
Note: When the following changes are made to Settings -> Compiler... as described, the resources will be available in any Code::Blocks project, as they are global. To restrict this to a specific project only, add the changes to Project -> Build options... instead.
We here assume that the Boost Library is already installed and available on your machine. To check if Boost is installed click here. If it is not installed yet, follow these steps: Install Boost
/usr/lib/libvtkalglib.so
/usr/lib/libvtkCharts.so
/usr/lib/libvtkCommon.so
/usr/lib/libvtkDICOMParser.so
/usr/lib/libvtkexoIIc.so
/usr/lib/libvtkFiltering.so
/usr/lib/libvtkftgl.so
/usr/lib/libvtkGenericFiltering.so
/usr/lib/libvtkGeovis.so
/usr/lib/libvtkGraphics.so
/usr/lib/libvtkHybrid.so
/usr/lib/libvtkImaging.so
/usr/lib/libvtkInfovis.so
/usr/lib/libvtkIO.so
/usr/lib/libvtkmetaio.so
/usr/lib/libvtkParallel.so
/usr/lib/libvtkproj4.so
/usr/lib/libvtkQtChart.so
/usr/lib/libvtkRendering.so
/usr/lib/libvtksys.so
/usr/lib/libvtkverdict.so
/usr/lib/libvtkViews.so
/usr/lib/libvtkVolumeRendering.so
/usr/lib/libvtkWidgets.so
/usr/lib/libpcl_apps.so
/usr/lib/libpcl_common.so
/usr/lib/libpcl_features.so
/usr/lib/libpcl_filters.so
/usr/lib/libpcl_io.so
/usr/lib/libpcl_io_ply.so
/usr/lib/libpcl_kdtree.so
/usr/lib/libpcl_keypoints.so
/usr/lib/libpcl_octree.so
/usr/lib/libpcl_outofcore.so
/usr/lib/libpcl_people.so
/usr/lib/libpcl_recognition.so
/usr/lib/libpcl_registration.so
/usr/lib/libpcl_sample_consensus.so
/usr/lib/libpcl_search.so
/usr/lib/libpcl_segmentation.so
/usr/lib/libpcl_surface.so
/usr/lib/libpcl_tracking.so
/usr/lib/libpcl_visualization.so
/usr/lib/x86_64-linux-gnu/libboost_thread.so
/usr/lib/x86_64-linux-gnu/libpthread.so
/usr/lib/x86_64-linux-gnu/libboost_filesystem.so
/usr/lib/x86_64-linux-gnu/libboost_iostreams.so
/usr/lib/x86_64-linux-gnu/libboost_system.so
Next change to the tab Search directories and add the following to the list:
/usr/include/eigen3
/usr/include/pcl-1.7
/usr/include/vtk-5.8
/usr/include/pcl-1.7/pcl/surface
After you have added all these entries to the Code::Blocks settings, your machine should be all set to run Point Cloud Library code. To test that everything works as expected, open a new file in Code::Blocks and enter the following code from the PCL documentation:
#include <iostream>
#include <pcl/io/pcd_io.h>
#include <pcl/point_types.h>

int main (int argc, char** argv)
{
  pcl::PointCloud<pcl::PointXYZ> cloud;

  // Fill in the cloud data
  cloud.width = 5;
  cloud.height = 1;
  cloud.is_dense = false;
  cloud.points.resize (cloud.width * cloud.height);

  for (size_t i = 0; i < cloud.points.size (); ++i)
  {
    cloud.points[i].x = 1024 * rand () / (RAND_MAX + 1.0f);
    cloud.points[i].y = 1024 * rand () / (RAND_MAX + 1.0f);
    cloud.points[i].z = 1024 * rand () / (RAND_MAX + 1.0f);
  }

  pcl::io::savePCDFileASCII ("test_pcd.pcd", cloud);
  std::cerr << "Saved " << cloud.points.size () << " data points to test_pcd.pcd." << std::endl;

  for (size_t i = 0; i < cloud.points.size (); ++i)
    std::cerr << "    " << cloud.points[i].x << " "
                        << cloud.points[i].y << " "
                        << cloud.points[i].z << std::endl;

  return (0);
}
A terminal window should pop up and print the five randomly generated points that were saved to test_pcd.pcd.
kevinkatzke · 9 years ago
EV3 leJOS Mapping
If you’re developing leJOS applications there might come a point where you want to create a map of the area your robot is moving in. Such maps can be used to move your robot around or to perform localization (e.g. Monte Carlo localization). leJOS comes with built-in PC tools, of which the so-called ev3MapCommand is part. This is a very useful GUI application which allows you to upload SVG maps to your robot.
To start ev3MapCommand you need to change into your local leJOS directory using your terminal (please note that your leJOS installation might be located at a different path!):
#Change into LeJOS's /bin directory cd Documents/leJOS_EV3_0.9.0-beta/bin
The /bin folder that you’ve just accessed contains all the PC tool executables included in leJOS. Now you can start ev3MapCommand simply by running the following command:
#open the GUI App open ev3mapcommand
(Note that you need to have Java installed to do so. If you are not sure whether you have a local Java installation, run: java -version)
After running the last command the ev3MapCommand GUI tool should pop up. If it does not, check the output of the terminal window that fires up along with the ev3MapCommand GUI.
Additionally, check with the OS X Finder that the executable is correctly located.
For further reading check the following resources:
LeJOS PC-Tools
LeJOS HTTYR: Map Command
Error when using the EV3mapcommand
kevinkatzke · 9 years ago
Safari Pop-Up Spam
I consider myself experienced enough to deal with any kind of pop-up spam the internet has to offer, and I was sure I had seen them all. But a few days ago I learned that there is more evil out there than I ever thought.
While visiting a website that makes a lot of money hosting a huge bulk of endless advertisements, I had an interesting experience. A new tab was opened in which a pop-up showed up that I tried to close by clicking the cancel button. As expected, a new pop-up appeared, but Safari gave me the opportunity to check a checkbox to block any further pop-ups from that page. I clicked Cancel one more time and suddenly 15-20 new tabs were opened, all of them showing the same pop-up message saying that my computer has spy- or adware on it and that I should immediately call a telephone number displayed in the alert.
Closing that first pop-up wasn’t as effective as I expected, and as I started closing all those pop-ups the evil showed its ugly face: closing a single pop-up opened at least 3 more. In that moment I realized 2 important facts that dazed me:
1) A few weeks ago I disabled Pop-Up-Blocking in my Safari Settings for a project I was working on and I forgot to change that setting back (Never do that :/ ) 
2) In my Safari Settings I have chosen to save my session when I close my browser, allowing me to have all my 200 tabs back open for when I restart Safari the next day. (Yes, I need all of these tabs and yes I will read those websites and some point and close the tab when I’m done -.-)
So, just to be sure I was really stuck in that JavaScript pop-up loop, I closed Safari and reopened it. Aaaannndd... hello back again, you damn f***ing pop-up disaster :( So what do you do when you cannot change any Safari setting, because a pop-up is open and you are forced to close it, which makes it obviously impossible to change your settings back to “Block Pop-Ups”? Right, you ask Google and Google asks StackOverflow.
I read that unlike other browsers on OS X, Safari hosts pages in separate running processes on your Mac. This means that by accessing your Mac’s Activity Monitor you should be able to force quit any of these open tabs.
So I clicked on one of the processes and hit the X button on the top left to force quit it. At first I thought this would solve the problem, but it turned out to be as ineffective as my first attempt, as the pop-ups reopened fast enough that I had no chance to close all of them. I was kind of shocked that I was really stuck in a JavaScript pop-up loop and had no clue how to fix it. So what’s next? My first idea was to delete the physical data holding Safari’s saved state, which is looked up each time Safari starts in order to rebuild all these extremely important tabs. By doing this not only would my beloved tabs be gone, but with them I would have the chance to overcome the pop-up misery. I was already looking up the “com.apple.Safari.savedState” file on my system to finish all this when I suddenly remembered that there is a shortcut for exactly that!
When you force quit Safari and then hold down the SHIFT key while relaunching it, Safari starts completely blank. This option prevents Safari from launching in the state you closed it in before. So I tried this and it worked! :D No more pop-ups telling me that I’m an idiot. Damn, I lost 30 minutes on this, and you know how it is: such things never happen in your spare time but always when you’re in a hurry anyway.
And you know what? I loved my blank Safari window. For the first time in 5 years or so I had just a single tab open, and it was awesome. It almost felt like I had finally read all those super important articles and learned all that amazing stuff I always wanted to know.
Edit: It’s 5 days after day zero. While browsing the internet I found amazing articles, tutorials, videos and frameworks, which I will work through as soon as possible. The corresponding tabs are open. I’ll keep you updated!
kevinkatzke · 9 years ago
Setting up Octave for OS X Terminal
I started taking the Machine Learning class on Coursera and so far it is awesome! In week 2 the students get introduced to MATLAB and Octave, and while MATLAB is super smooth to use, Octave unfortunately is not. It is not Octave itself I’m talking about; it is the Octave GUI.
The instructor Andrew Ng uses his Windows command line during the course, but following along using the provided Octave GUI is strange.
Intuitively I tried to navigate my input using the left and right arrow keys, but the GUI doesn’t support this. The same goes for the up/down arrow keys, which I expected to let me browse my command history as we all know it from the terminal or command line. In a nutshell, the user experience is not optimal.
The problem is known and bug reports can be found here and here.
So after spending some time looking for a simple way to use Octave with my OS X Terminal I decided to publish a short tutorial on how to set this up.
Open your terminal and type in the following:
# install Homebrew (http://brew.sh/) if you don't already have it installed
ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"

# Next, tap the science formulae
brew tap homebrew/science

# Check for updates (skip this if you just installed homebrew)
brew update && brew upgrade

# install gfortran
brew install gfortran
# !!! If you just got the following alert:
# 'GNU Fortran is now provided as part of GCC, and can be installed with: brew install gcc'
# Go ahead and do so
brew install gcc

# install octave (this can take a few minutes)
brew install octave

# install gnuplot
brew install gnuplot --with-qt
Now open a new terminal window and navigate to your home directory. Here you need to create a new file called .octaverc and open it by executing the following commands:
cd ~ touch .octaverc open .octaverc
You can now specify global Octave settings by typing them into the open editor. First we set the gnuplot terminal to qt. After that we make the Octave prompt display only a short ❯❯ on each new line instead of the long default prompt, which looks much nicer:
setenv ("GNUTERM", "qt")

# below is optional; changes the prompt to two chevrons
# and gets rid of the long default Octave prompt
PS1('❯❯ ')
There are some blog posts out there on setting up Octave for the OS X terminal, but the main obstacle for me was that all of them use X11 as the gnuplot terminal, which was not working on my machine. When I changed this to qt I was suddenly able to generate plots of my functions.
You can now go on and test your Octave environment by opening a new terminal window and entering the following commands:
## don't type in the extra chevrons. They should automatically show up
## when you've added the line PS1('❯❯ ') to your .octaverc as mentioned above.
>> t = [0:0.01:0.98];
>> y = sin(2*pi*4*t);
>> plot(t,y)
By executing the plot() function a new window should open up showing you a beautiful graph of the sine function.
Let me know if I forgot something or if you found mistakes in my explanation. Have fun using Octave in your terminal and keep up with the Machine Learning class :)
Extra information / further reading can be found here:
Can't plot with Gnuplot on my Mac
How to install Octave
Mac Setup - Octave
Octave plottig error