My name is Raúl Roa. I am currently a student in The Guildhall @ SMU's Master's in Interactive Technology program, specializing in Software Development. I am an experienced software developer, particularly with Microsoft's .NET stack (7+ years) and C++. My interests range from math-related problems to game and Web development. Outside of programming I enjoy association football, chess, and sports in general. I like dancing salsa and listening to music while attempting to understand the meaning of life :)
Let us Game!
Introduction
This is the fifth article of an ongoing series about game development for the Web. The intention behind the previous articles was to provide you with the minimum (but solid) tools to start developing two-dimensional games using the Canvas 2D Context API.
Up until now, we have learned about key functions to draw simple shapes on the canvas element, as well as functions to transform those shapes without having to manually update their location, orientation, or scaling factors. It is time to dive deeper into the game development world and put to use what we have learned so far about the Canvas 2D Context API.
What is a game?
We all know at heart what a game is, but when it comes to putting it into words we may find it difficult, since the term encompasses a wide range of activities that fall under our intuitive notion of what a game is.
In his book, A Theory of Fun for Game Design, Raph Koster defines a game as an interactive experience that provides the player with an increasingly challenging sequence of patterns which he or she learns and eventually masters. On the other hand, Jason Gregory, in his book, Game Engine Architecture, reflects on this assertion, adding that the activities of learning and mastering are at the heart of what we call “fun”, just as a joke becomes funny at the moment we “get it” by recognizing the pattern.
The definition of the word game is very broad, and many other authors offer different assertions with varying degrees of validity. For the purposes of this series of posts, we will focus on the subset comprised of two-dimensional and three-dimensional virtual worlds where the player or players control a character or characters under a pre-determined set of rules using a computer. If you would like to expand on this topic, IGN has a nice article about the term “video game” and its inadequacy.
Now that we got that out of the way, it is time for us to get into the technical details of how to create video games.
The Game Loop
Games, unlike other applications, need to keep updating and processing data even when there is no user input. Real-time rendering is concerned with making images rapidly on the computer. An image appears on the screen, the viewer acts or reacts, and this feedback affects what is generated next. This cycle of reaction and rendering happens at a rapid enough rate that the viewer does not see individual images, but rather becomes immersed in a dynamic process.
In order to achieve this, a series of operations needs to run cyclically to interpret input, update the simulation, and render the game to the player. The game loop is the heartbeat of the game, and like a real heart, it commonly runs at a fixed time interval. The rate at which images are displayed is measured in frames per second (fps) or Hertz (Hz). There is a lot of discussion about which rate is most suitable for the sense of interactivity. Displaying images at a rate of 15 fps should be enough to convey that sense of interactivity. Obviously, a higher display rate provides smoother, more immersive output, but the truth is that after approximately 70 fps the difference is imperceptible. That said, games usually display images at a rate of 30 fps or 60 fps.
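To put numbers to those rates: the per-frame time budget is simply one second divided by the frame rate. A tiny sketch (the helper name here is ours):

```javascript
// Convert a target frame rate into the time budget each frame gets.
function millisecondsPerFrame(fps) {
  return 1000 / fps;
}

console.log(millisecondsPerFrame(30)); // ~33.33 ms of budget per frame
console.log(millisecondsPerFrame(60)); // ~16.67 ms of budget per frame
```

Everything a frame does — input, simulation, rendering — has to fit inside that budget.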
As described above, a naïve implementation of the game loop processes player input, simulates the game world to react to that input, and then renders the scene. In a more advanced implementation of the main loop, many sub-systems are serviced periodically and independently, such as Artificial Intelligence, Sound, Networking, Collision Detection, etc. Since the time per iteration is fixed, all of these systems must run their routines extremely fast. Hence game developers' endless interest in optimizing the way code runs. Faster-executing code within your main loop is equivalent to smoother gameplay, period.
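The periodic servicing described above can be sketched as a single frame function; the sub-system objects and method names below are illustrative placeholders, not a prescribed engine API:

```javascript
// One iteration of an advanced main loop: each sub-system is
// serviced once per frame, in a fixed order.
function runFrame(game) {
  game.input.poll();     // interpret player input
  game.ai.update();      // run Artificial Intelligence routines
  game.physics.step();   // advance the simulation / collision detection
  game.audio.update();   // service the sound system
  game.renderer.draw();  // render the scene
}
```

The fixed ordering matters: rendering always observes the state the simulation just produced.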
Implementing the main loop
The main loop is commonly expressed in code as an infinite loop:

```javascript
// Naive implementation of the main loop
while (true)
{
    // Input
    // Update Simulation
    // Render
}
```
The implementation above executes sequentially the minimum routines needed to have a game. There is a problem with it, though: it will run as fast as it can. Given that each iteration of the loop generates an image, the number of images generated per second will vary from one hardware platform to another. This issue can be solved by using time to constrain the number of iterations per second. This lets us control how many frames are rendered each second and guarantees that the user experience will be consistent across different hardware platforms.
Example:
```javascript
// Test Management
var AMOUNT_OF_TESTS = 10;

// Let's add a spinlock to wait until it's time to update!
var DESIRED_FRAME_RATE = 60.0;
var DESIRED_SECONDS_PER_FRAME = 1.0 / DESIRED_FRAME_RATE;
var DESIRED_MILLISECONDS_PER_FRAME = DESIRED_SECONDS_PER_FRAME * 1000;

console.log("Desired Frame Rate: " + DESIRED_FRAME_RATE);

// Time Management
var nextUpdate = new Date().getTime();
var initTime = new Date().getTime();
var elapsedTime = 0;
var amountOfUpdates = 0;

// Test Management
var frameLimitTestCounter = 0;

while (frameLimitTestCounter < AMOUNT_OF_TESTS)
{
    while (new Date().getTime() >= nextUpdate) // let's wait to update!
    {
        nextUpdate = new Date().getTime() + DESIRED_MILLISECONDS_PER_FRAME;
        amountOfUpdates++;
    }

    elapsedTime = new Date().getTime() - initTime;

    if (elapsedTime >= 1000)
    {
        frameLimitTestCounter++;
        console.log("Test #" + frameLimitTestCounter + " " + amountOfUpdates + " updates in a second.");

        // Reset variables for the next test
        elapsedTime = 0;
        amountOfUpdates = 0;
        initTime = new Date().getTime();
    }
}
```
Since in this series we are targeting the Web as a platform, some considerations need to be made about how we build our main loop. Browsers themselves implement a main loop to render Web pages and repaint their windows at a fixed frame rate.
Implementing infinite loops in JavaScript, like the ones stated above, will block the browser's UI thread. This will make your browser unresponsive, since it cannot repaint the window; therefore, unless we are using an independent platform like node.js to execute our JavaScript code, we should avoid them. Because of this, the browser provides window interfaces for calling functions at fixed time intervals from JavaScript. These tools allow us to create a main loop with a fixed frame rate without having to implement the time handling ourselves.
window.setInterval
The purpose of this API method is to call a function or execute a code snippet repeatedly, with a fixed time delay between each call to that function. You can see the official documentation here.
Example:
```javascript
// Test Management
var AMOUNT_OF_TESTS = 10;

// Frame Rate Management
var DESIRED_FRAME_RATE = 60.0;
var DESIRED_MILLISECONDS_PER_FRAME = (1.0 / DESIRED_FRAME_RATE) * 1000;

// Time Management
var amountOfUpdates = 0;
var elapsedTime = 0;
var setIntervalTestCounter = 0;
var time;
var lastFireTime;
var initTime = new Date().getTime();

var myInterval = setInterval(function () {
    time = Date.now();

    if (lastFireTime !== undefined)
    {
        // Time taken between function calls
        //console.log(time - lastFireTime);
    }

    lastFireTime = time;
    amountOfUpdates++;

    elapsedTime = new Date().getTime() - initTime;

    if (elapsedTime >= 1000)
    {
        setIntervalTestCounter++;
        console.log("Test #" + setIntervalTestCounter + " " + amountOfUpdates + " updates in a second.");

        // Reset variables for the next test
        elapsedTime = 0;
        amountOfUpdates = 0;
        initTime = new Date().getTime();
    }

    if (setIntervalTestCounter >= AMOUNT_OF_TESTS)
    {
        window.clearInterval(myInterval);
    }
}, DESIRED_MILLISECONDS_PER_FRAME);
```
window.requestAnimationFrame
This is one of the new additions that came with the interactivity HTML5 provides. The purpose of this API method is to tell the browser that you wish to perform an animation; it requests that the browser call a specified function to update the animation before the next repaint. Here is a link to the official documentation.
Example:
```javascript
// requestAnimationFrame polyfill for older browsers
(function () {
    var lastTime = 0;
    var vendors = ['ms', 'moz', 'webkit', 'o'];

    for (var x = 0; x < vendors.length && !window.requestAnimationFrame; ++x)
    {
        window.requestAnimationFrame = window[vendors[x] + 'RequestAnimationFrame'];
        window.cancelAnimationFrame = window[vendors[x] + 'CancelAnimationFrame'] ||
                                      window[vendors[x] + 'CancelRequestAnimationFrame'];
    }

    if (!window.requestAnimationFrame)
        window.requestAnimationFrame = function (callback) {
            var currTime = new Date().getTime();
            var timeToCall = Math.max(0, 16 - (currTime - lastTime));
            var id = window.setTimeout(function () { callback(currTime + timeToCall); }, timeToCall);
            lastTime = currTime + timeToCall;
            return id;
        };

    if (!window.cancelAnimationFrame)
        window.cancelAnimationFrame = function (id) {
            clearTimeout(id);
        };
}());

// Test Management
var AMOUNT_OF_TESTS = 10;
var requestAnimationTestCounter = 0;

// Time Management
var initTime = new Date().getTime();
var elapsedTime = 0;
var amountOfUpdates = 0;

var myReq;

function step() {
    amountOfUpdates++;
    elapsedTime = new Date().getTime() - initTime;

    if (elapsedTime >= 1000)
    {
        requestAnimationTestCounter++;
        console.log("Test #" + requestAnimationTestCounter + " " + amountOfUpdates + " updates in a second.");

        // Reset variables for the next test
        elapsedTime = 0;
        amountOfUpdates = 0;
        initTime = new Date().getTime();
    }

    if (requestAnimationTestCounter >= AMOUNT_OF_TESTS)
    {
        window.cancelAnimationFrame(myReq);
    }
    else
    {
        myReq = requestAnimationFrame(step);
    }
}

myReq = requestAnimationFrame(step);
```
Working with Real-Time Rendering is a little bit different when targeting the Web. We cannot control how HTML elements are rendered by the browser and in some cases we hand over the responsibility to the canvas element itself using the context API. It is very important that we remember this, since some of our decisions moving forward might feel odd when compared to traditional game development.
Double Buffering
Double buffering is a technique of drawing an image to a memory buffer before displaying it to the screen in order to avoid flickering, tearing, and other artifacts.
The pixels that comprise the image are written to an array in RAM called the framebuffer. The ultimate purpose of our game code is to produce the pixel values that end up being placed in that framebuffer. Similar to the problems that arise when modifying an array while iterating through its contents, modifying the framebuffer while its pixel values are being rendered to the screen may result in undesired images or behaviors. Therefore, a common approach is to draw to another memory location called the “back buffer” and then swap it with the framebuffer.
Demonstration of front & back buffer swaps.
Double buffering is a very common technique for real-time rendering of games, simulations, and animated movies. The canvas element implements double buffering by default as long as we are using the 2D Context API to render primitives and images. When we implement animations, it is ideal to take the matter into our own hands, clearing the back buffer at the beginning of every rendering call and swapping the buffers at the end. Many other graphics libraries implement double buffering as well and provide methods and interfaces to perform the swap for us.
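As a sketch of taking the matter into our own hands, the rendering pass below draws everything to an off-screen context first and then copies the finished frame to the visible one. The function name and the rectangle it draws are illustrative, not part of any API:

```javascript
// A back-buffered render pass: clear the back buffer, draw the scene
// to it, then "swap" by copying it onto the front (visible) context.
function renderFrame(frontCtx, backCanvas, backCtx) {
  // 1. Clear the back buffer at the beginning of the call
  backCtx.clearRect(0, 0, backCanvas.width, backCanvas.height);

  // 2. Draw the scene to the back buffer
  backCtx.fillStyle = 'blue';
  backCtx.fillRect(10, 10, 100, 50);

  // 3. Swap: present the completed back buffer on the front context
  frontCtx.drawImage(backCanvas, 0, 0);
}
```

In a page, the back buffer can be an off-screen canvas created with document.createElement('canvas') and sized to match the visible canvas; the viewer only ever sees completed frames.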
Conclusion
In this article, we discussed the concept of a game; we talked about the heart of games and a common technique to display images on screen. Further, in this series, we will implement the concepts in a full-fledged example.
Bibliography
[1] J. Gregory, Game Engine Architecture, 1st ed., Boca Raton: A K Peters/CRC Press, 2009.
[2] R. Koster, A Theory of Fun for Game Design, 2nd ed., O'Reilly Media, 2013.
[3] M. McShaffry and D. Graham, Game Coding Complete, 4th ed., Cengage Learning PTR, 2012.
[4] T. Akenine-Moller, E. Haines and N. Hoffman, Real-Time Rendering, 3rd ed., Boca Raton: CRC Press, 2008.
Packt's 2000th Title Offer
If you have been following along, you will notice that lately I have done some book reviews for Packt Publishing.
Packt is one of the most prolific and fast-growing tech book publishers in the world. And now, they have a campaign to coincide with the release of their 2000th title. If you buy one book you get one for free.
This is a good opportunity to invest in books that might help with your professional education.
You can find more information about the offer here.
What books will you get? Let me know in the comments down below.
Learning Game Physics with Bullet Physics and OpenGL: Review

Bullet is an open-source physics engine featuring 3D collision detection and soft- and rigid-body dynamics. It is relevant because it is integrated into many 3D modelers, such as Maya and Blender. In addition, it is used in video games and in visual effects in movies.
Learning Game Physics with Bullet Physics and OpenGL attempts to be a reference for implementing the Bullet physics engine with OpenGL. Physics simulation is a daunting topic, technically and theoretically, and Bullet aims to ease the aspects involved in this type of development.
The book targets beginner and intermediate programmers with a basic understanding of linear algebra. It does a great job building up from the setup of a simple OpenGL application (using FreeGLUT) to the actual implementation of the physics engine. It is written as if every chapter is a self-contained tutorial.
I liked the pace and the way the author described every little aspect of how things work under the hood. However, the OpenGL rendering examples are done using the old fixed-pipeline and I would have wished to see the examples using the programmable one.
Learning Game Physics with Bullet Physics and OpenGL is a great introductory book for game developers who are looking to jump into soft and rigid body dynamics implementations and would like to have a sandbox before implementing their own.
If you are starting out with game development and would like to build a sandbox to understand the underlying theory behind soft and rigid body dynamics you should grab a copy.
If you have any further comments about this book or have suggestions about other books covering this topic please share them on the comments down below.
GLSL Essentials: Review

Shader programming has been the largest revolution in graphics programming. The OpenGL Shading Language is a high-level shading language based on the syntax of the C programming language. With GLSL you can execute code on your graphics card and add more sophisticated effects to your renders.
GLSL Essentials attempts to be the definitive book for learning shaders from the very beginning, going through the evolution of graphics programming and describing in depth each stage of the graphics rendering pipeline. The book explains how shaders work in a step-by-step manner, with an explanation of how they interact with the application assets at each stage.
The book is intended for people with some experience in application programming interfaces (API) for rendering 2D and 3D computer graphics, who are willing to update their knowledge to the newest OpenGL version practices.
The author assumes some familiarity with OpenGL and C/C++ as the language for host applications, which helps to cut down on a lot of introductory boilerplate. It's best to start reading from the beginning, as examples build on information from previous chapters.
The examples are short but long enough to describe the power of the technology at hand. I find this book useful for people trying to make the jump from the fixed pipeline paradigm to the new programmable pipeline.
All in all I think it's a good resource for anyone who wants to jump into Graphics Programming using the "modern" OpenGL approach.
Conclusion
I believe that the author, Jacobo Rodríguez, does a great job describing the programmable graphics pipeline and makes a swift transition from the basic theoretical concepts to practical demonstrations of what role each type of shader plays in it.
It's a short and rewarding experience that any computer graphics programmer should embark on in order to get a grasp of the new OpenGL practices and the current state of its shading language.
Grab a copy and let me know what YOU think on the comments down below.
Transforms
Introduction
This is the fourth article of an ongoing series about game development for the Web. In the previous article, we discussed the Canvas 2D Context API interface and went over the functions in charge of displaying primitives and text.
In this delivery, we will focus on the set of functions meant to alter the drawn geometry or text location and orientation within the Canvas Element.
The Math
I have created an article that goes over some math concepts relevant to the topics discussed in this article. This is not intended to be the definitive math guide, but at least provides an overview of the magic going on behind the scenes for the API functions discussed here.
Transforms & the Canvas 2D Context API
The Canvas 2D Context API, like many other two-dimensional APIs, has support for standard transformations like translation, rotation, and scaling. This allows us to draw transformed elements around the screen without having to calculate the new points by hand. If needed, you can also combine transforms by calling them in order. In the Canvas 2D Context API, transformations work by first transforming the canvas context and then drawing onto it.
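Because transforms compound, the order of the calls matters. A small sketch multiplying affine matrices — using the same [a, b, c, d, e, f] component layout the context's transform() method takes — makes this concrete; the multiply helper is ours, not part of the API:

```javascript
// Multiply two 2D affine transforms, each given as [a, b, c, d, e, f],
// i.e. the matrix  a c e
//                  b d f
//                  0 0 1
function multiply(m, n) {
  return [
    m[0] * n[0] + m[2] * n[1],
    m[1] * n[0] + m[3] * n[1],
    m[0] * n[2] + m[2] * n[3],
    m[1] * n[2] + m[3] * n[3],
    m[0] * n[4] + m[2] * n[5] + m[4],
    m[1] * n[4] + m[3] * n[5] + m[5]
  ];
}

var translate = [1, 0, 0, 1, 100, 0]; // move 100px to the right
var rotate90 = [0, 1, -1, 0, 0, 0];   // quarter turn clockwise

// Translate-then-rotate and rotate-then-translate give different results:
console.log(multiply(translate, rotate90)); // [0, 1, -1, 0, 100, 0]
console.log(multiply(rotate90, translate)); // [0, 1, -1, 0, 0, 100]
```

Calling ctx.translate(100, 0) followed by ctx.rotate(Math.PI / 2) corresponds to the first product; reversing the calls corresponds to the second.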
Translate Transform
To translate the Canvas 2D Context, we use the translate() interface method of the API. Translations enable us to move entire pieces of the canvas with just one call.
translate(x, y) - remaps the (0,0) origin of the canvas. Both the x and y offsets are required.
Example:
```javascript
// Get a reference to the canvas
var canvas = document.getElementById('canvas');

// Get a reference to the drawing context
var ctx = canvas.getContext('2d');

var rectWidth = 150;
var rectHeight = 75;

// Translate the context to the center of the canvas
ctx.translate(canvas.width / 2, canvas.height / 2);

ctx.fillStyle = 'blue';
ctx.fillRect(rectWidth / -2, rectHeight / -2, rectWidth, rectHeight);
```
Rotate Transform
To rotate the Canvas 2D Context, we use the rotate() interface method of the API. The rotate transformation requires an angle in radians. To define the rotation point, we need to first translate the canvas context such that the top left corner of the context lies on the desired rotation point.
rotate(angle) - rotates the context based on the current origin. The rotation angle must be given in radians.
Example:
```javascript
// Get a reference to the canvas
var canvas = document.getElementById('canvas');

// Get a reference to the drawing context
var ctx = canvas.getContext('2d');

var rectWidth = 150;
var rectHeight = 75;

// Translate the context to the center of the canvas
ctx.translate(canvas.width / 2, canvas.height / 2);

// Rotate 45 degrees clockwise
ctx.rotate(Math.PI / 4);

ctx.fillStyle = 'blue';
ctx.fillRect(rectWidth / -2, rectHeight / -2, rectWidth, rectHeight);
```
Scale Transform
To scale the Canvas 2D Context, we use the scale() transform method of the API. The method only requires the x and y scaling factors as the parameters.
scale(x, y) - scales the context by the specified scaling factors along each axis. Both factors are required.
Example:
```javascript
// Get a reference to the canvas
var canvas = document.getElementById('canvas');

// Get a reference to the drawing context
var ctx = canvas.getContext('2d');

var rectWidth = 150;
var rectHeight = 75;

// Translate the context to the center of the canvas
ctx.translate(canvas.width / 2, canvas.height / 2);

// Scale the y component
ctx.scale(1, 0.5);

ctx.fillStyle = 'blue';
ctx.fillRect(rectWidth / -2, rectHeight / -2, rectWidth, rectHeight);
```
Custom Transform
To apply a custom transformation matrix to the Canvas 2D Context, we use the transform() method of the API. This method follows the convention defined by the SVGMatrix interface, where the six arguments map onto a 3x3 transformation matrix as follows:

transform(a, b, c, d, e, f) - multiplies the current transformation matrix by the matrix

a c e
b d f
0 0 1
Example:
```javascript
// Get a reference to the canvas
var canvas = document.getElementById('canvas');

// Get a reference to the drawing context
var ctx = canvas.getContext('2d');

var rectWidth = 150;
var rectHeight = 75;

// Translation matrix:
// 1 0 tx
// 0 1 ty
// 0 0 1
var tx = canvas.width / 2;
var ty = canvas.height / 2;

// Translate the context to the center of the canvas
ctx.transform(1, 0, 0, 1, tx, ty);

ctx.fillStyle = 'blue';
ctx.fillRect(rectWidth / -2, rectHeight / -2, rectWidth, rectHeight);
```
State Management
When drawing to the Canvas using its 2D Context API, the 2D Context itself is in a certain state. Every time we manipulate the context by using a drawing method or applying a transformation, we modify the context’s state.
We can think of the canvas drawing state as a snapshot of all the styles and transformations that we have applied. The Canvas modifications persisted per state are:
The transformations such as translate, rotate and scale etc.
The current clipping region.
The current values of the following attributes: strokeStyle, fillStyle, globalAlpha, lineWidth, lineCap, lineJoin, miterLimit, shadowOffsetX, shadowOffsetY, shadowBlur, shadowColor, globalCompositeOperation, font, textAlign, textBaseline.
Often, we need to transform several elements independently while drawing to the Canvas. For instance, we might want rectangles with different stroke styles or orientations. The 2D Context API provides a state stack, which allows easy switching between states depending on our needs. This is a quick way of resuming earlier states after temporary changes.
In order to push and pop states from the Canvas 2D Context we need to use the following methods:
save(), pushes the current state to the stack.
restore(), pops the top state from the stack and sets it as the current state of the context.
Example:
```javascript
// Get a reference to the canvas
var canvas = document.getElementById('canvas');

// Get a reference to the drawing context
var ctx = canvas.getContext('2d');

var rectWidth = 150;
var rectHeight = 75;

ctx.save(); // save state 1

ctx.translate(canvas.width / 2, canvas.height / 2);
ctx.save(); // save state 2

ctx.rotate(Math.PI / 4);
ctx.save(); // save state 3

ctx.scale(2, 2);
ctx.fillStyle = 'blue';
ctx.fillRect(rectWidth / -2, rectHeight / -2, rectWidth, rectHeight);

ctx.restore(); // restore state 3
ctx.fillStyle = 'red';
ctx.fillRect(rectWidth / -2, rectHeight / -2, rectWidth, rectHeight);

ctx.restore(); // restore state 2
ctx.fillStyle = 'yellow';
ctx.fillRect(rectWidth / -2, rectHeight / -2, rectWidth, rectHeight);

ctx.restore(); // restore state 1
ctx.fillStyle = 'green';
ctx.fillRect(rectWidth / -2, rectHeight / -2, rectWidth, rectHeight);
```
Examples
The source code for these examples is hosted on GitHub and you can download it and do whatever you want with it. Likewise, the live demos can be accessed by clicking this link.
Conclusion
In this article we took a closer look at the set of methods meant to alter the Canvas context's orientation and location. We also looked at the math behind the transformation methods and some basic concepts that will underpin other development techniques often used in simple and complex games.
Moving forward in the series, we will take what we already know about the CanvasRenderingContext2D API and put it together into some basic 2D games.
References
Weisstein, Eric W. "Scalar." From MathWorld--A Wolfram Web Resource. http://mathworld.wolfram.com/Scalar.html
Weisstein, Eric W. "Linear Algebra." From MathWorld--A Wolfram Web Resource. http://mathworld.wolfram.com/LinearAlgebra.html
Weisstein, Eric W. "Affine Transformation." From MathWorld--A Wolfram Web Resource. http://mathworld.wolfram.com/AffineTransformation.html
Canvas Rendering Context 2D
Introduction
This is the third article of an ongoing series about game development for the Web. In the previous article, we saw a general overview of the graphics pipeline and briefly discussed the rendering context. It is necessary to understand these notions moving forward in this series. From this point on, content will be discussed under that assumption.
In this delivery, we will take a deeper look into the HTML Canvas Element and the HTML Canvas 2D Context.
What is the Canvas Element again?
In the first article of the series, we defined the Canvas Element as “a drawing region to display programmable graphics in a Web Page”. It is necessary to emphasize that the Canvas Element is just a medium and not an API, the rendering context will define the API with which the content will be drawn.
The Canvas Element is not the only medium for drawing graphics on the Web; other options exist, such as SVG and direct DOM animations, each of which differs from the others and has its own strengths and weaknesses. For example, the Canvas Element is resolution-dependent, which means that its output is displayed only at certain predetermined resolutions; SVG, in contrast, is resolution-independent, which means that it will output content at the highest resolution possible. In addition, SVG uses vector graphics instead of pixels, unlike the Canvas Element. One of the most notable advantages of SVG is that vector-based graphics can be scaled by any amount without degrading in quality.
You might ask yourself why we are not using SVG to develop games in the browser. The answer is simple: performance and flexibility. SVG is a retained-mode graphics model, which means that it retains a complete model of the objects to be rendered. The Canvas Element, on the other hand, is an immediate-mode graphics model, which means that it must re-issue all of the drawing commands required to describe the scene every frame. The latter provides the maximum amount of control and flexibility to the application program, which makes it ideal for video game development. Learn about rendering modes here.
Moreover, if you feel like exploring in more depth the differences between SVG and the Canvas Element I would recommend reading this article.
HTML Canvas 2D Context
The 2D Context is an API that provides objects, methods, and properties to draw and manipulate graphics on a canvas drawing surface in a two dimensional space. Each canvas has its own context, so if a page contains multiple canvas elements a reference to each context must exist. The context provides access to the 2D properties and methods that allow us to draw and manipulate images on a canvas element.
In the 2D Context, two-dimensional Cartesian coordinates represent the drawing surface space. That is, all points are a combination of an X value and a Y value. The X value represents the horizontal, or left to right, direction while the Y value represents the vertical, or top to bottom, direction. The origin is in the upper left hand corner. That means that as a point moves rightward, its X value increases. Likewise, as an object moves downward, the Y value increases. This is a traditional setting in the computer graphics world, but it can be changed using transforms.
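For example, the default top-left origin can be flipped with the transforms discussed in the previous article; the helper below (its name is ours) moves the origin to the bottom-left corner and makes Y grow upward, as in traditional Cartesian coordinates:

```javascript
// Flip the context so the Y axis points up and the origin sits at
// the bottom-left corner of the canvas.
function useCartesianCoordinates(ctx, canvasHeight) {
  ctx.translate(0, canvasHeight); // move the origin to the bottom edge
  ctx.scale(1, -1);               // flip the Y axis so it grows upward
}
```

After calling useCartesianCoordinates(ctx, canvas.height), a fillRect(0, 0, 50, 50) call draws in the bottom-left corner instead of the top-left.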

The 2D Context type is part of the HTML specification and is an implementation of the CanvasRenderingContext2D interface. This means that when the context identifier ‘2d’ is used as an argument of getContext() method of a Canvas Element the user agent must return a new CanvasRenderingContext2D object. The API interface conformance is defined here.
The API
The CanvasRenderingContext2D API interface allows you to draw and manipulate images and graphics on a Canvas Element. The interface puts multiple functions at your disposal, some of which we used in the introductory article of this series.
The CanvasRenderingContext2D API contains an entire group of functions devoted to drawing shapes. In addition, the color of the geometry drawn can be set using the fillStyle and strokeStyle attributes.
Let us cover some of those functions in more detail.
Rectangles
Like any other formal drawing API, in CanvasRenderingContext2D you have to specify every vertex of a polygon in order to draw a geometric figure on the screen. However, rectangles are an exception; the API provides three functions solely for this purpose.
fillRect(x, y, w, h) - draws the given rectangle onto the canvas.
Example:
```javascript
// Get a reference to the canvas
var canvas = document.getElementById('canvas');

// Get a reference to the drawing context
var ctx = canvas.getContext('2d');

// Create a gradient object
var gradient = ctx.createLinearGradient(0, 0, 300, 0);

// Add the colors with fixed stops at 1/4 of the width
gradient.addColorStop(0, "magenta");
gradient.addColorStop(0.25, "blue");
gradient.addColorStop(0.50, "green");
gradient.addColorStop(0.75, "yellow");
gradient.addColorStop(1.0, "red");

// Set the gradient as a fill pattern
ctx.fillStyle = gradient;

// Draw the rectangle
ctx.fillRect(0, 0, 300, 250);

// Set the fill pattern to red
ctx.fillStyle = "red";

// Draw the rectangle
ctx.fillRect(250, 300, 300, 250);

// Set the fill pattern to the hex code for yellow
// Notice how we are using CSS color patterns
ctx.fillStyle = "#FFFF00";

// Draw the rectangle
ctx.fillRect(500, 0, 300, 250);
```

strokeRect(x, y, w, h) - draws the box that outlines the given rectangle onto the canvas.
Example:
```javascript
// Get a reference to the canvas
var canvas = document.getElementById('canvas');

// Get a reference to the drawing context
var ctx = canvas.getContext('2d');

// Blue rectangle outline
ctx.lineWidth = 3;
ctx.strokeStyle = "blue";
ctx.strokeRect(5, 5, 300, 250);

// Red rectangle outline
ctx.lineWidth = 5;
ctx.strokeStyle = "red";
ctx.strokeRect(150, 200, 300, 150);

// Green rectangle outline with round corners
ctx.lineJoin = "round";
ctx.lineWidth = 7;
ctx.strokeStyle = "green";
ctx.strokeRect(250, 50, 150, 250);
```

clearRect(x, y, w, h) - clears all pixels on the canvas in the given rectangle to transparent black.
Example:
```javascript
// Get a reference to the canvas
var canvas = document.getElementById('canvas');

// Get a reference to the drawing context
var ctx = canvas.getContext('2d');

// Clear the center 80% of the canvas
ctx.clearRect(canvas.width * 0.1, canvas.height * 0.1,
              canvas.width * 0.8, canvas.height * 0.8);
```

Path API
In order to draw convex and concave polygons other than rectangles, we need to use a path. When using paths, we define each polygon segment using the lineTo() function and then instruct the API to draw the defined path using the stroke() function.
lineTo(x, y) - Adds a new point to a sub-path and connects that point to the last point in the sub-path by using a straight line.
```javascript
// Get a reference to the canvas
var canvas = document.getElementById('canvas');

// Get a reference to the drawing context
var ctx = canvas.getContext('2d');

// Translate the drawing to the center of the canvas
ctx.translate(canvas.width / 2, canvas.height / 2);

// Set the stroke pattern to red
ctx.strokeStyle = "#FF0000";

// Set the fill pattern to grey
ctx.fillStyle = "grey";

// Set the triangle side size
var side = 200;

// Calculate the triangle height
var height = side * Math.cos(Math.PI / 4);

// Reset the path
ctx.beginPath();

// Move to our starting point
ctx.moveTo(0, -height / 2);

// Draw our triangle lines
ctx.lineTo(-side / 2, height / 2);
ctx.lineTo(side / 2, height / 2);
ctx.lineTo(0, -height / 2);

ctx.stroke();
ctx.fill();

// Close the path
ctx.closePath();
```

Other Shared Path API methods
The API also provides other functions with the purpose of drawing curved shapes.
arc(x, y, radius, startAngle, endAngle, anticlockwise) - Adds points to a path that represents an arc.
Example:
// Get a reference to the canvas
var canvas = document.getElementById('canvas');
// Get a reference to the drawing context
var ctx = canvas.getContext('2d');

var colors = ["#01C001", "#0152D8", "#E46F0F", "#868686", "#FF2727", "#FF0000"];
var colorIndex = 0;

for (var i = 0; i < 2; ++i) {
    for (var j = 0; j < 3; ++j) {
        ctx.beginPath();
        ctx.strokeStyle = colors[colorIndex];

        var x = 25 + j * 50;
        var y = 25 + i * 50;
        var radius = 20;
        var startAngle = 0;
        var endAngle = Math.PI + (Math.PI * j) / 2;
        var anticlockwise = i % 2 == 0 ? false : true;

        ctx.arc(x, y, radius, startAngle, endAngle, anticlockwise);

        // Draw the arc.
        ctx.stroke();

        colorIndex++;
    }
}

arcTo(x1, y1, x2, y2, radius) - Draws an arc of a fixed radius between two tangents that are defined by the current point in a path and two additional points.
Example:
// Get a reference to the canvas
var canvas = document.getElementById('canvas');
// Get a reference to the drawing context
var ctx = canvas.getContext('2d');

// Drawing tangents
ctx.beginPath();
ctx.lineWidth = "3";
ctx.strokeStyle = "black";

// Horizontal line
ctx.moveTo(80, 100);
ctx.lineTo(240, 100);

// Vertical line
ctx.moveTo(200, 60);
ctx.lineTo(200, 220);

// Draw the lines
ctx.stroke();

ctx.beginPath();
ctx.strokeStyle = "red";
ctx.lineWidth = "5";
ctx.moveTo(120, 100);

// Draw a horizontal line.
ctx.lineTo(180, 100);

// Draw an arc to connect the horizontal and the vertical line
ctx.arcTo(200, 100, 200, 120, 20);

// Continue with a vertical line.
ctx.lineTo(200, 180);

// Draw the lines
ctx.stroke();

// Use the translate method to move the second example down.
ctx.translate(0, 220);

Text
The CanvasRenderingContext2D API also provides functions to draw text, another convenience not found in other mature APIs like OpenGL or Direct3D.
font - gets or sets the font for the current context.
fillText(text, x, y) - Renders filled text to the canvas by using the current fill style and font.
Example:
// Get a reference to the canvas
var canvas = document.getElementById('canvas');
// Get a reference to the drawing context
var ctx = canvas.getContext('2d');

var gradient = ctx.createLinearGradient(0, 0, canvas.width, 0);
// Add the colors with fixed stops at 1/4 of the width.
gradient.addColorStop("0", "magenta");
gradient.addColorStop(".25", "blue");
gradient.addColorStop(".50", "green");
gradient.addColorStop(".75", "yellow");
gradient.addColorStop("1.0", "red");

// Set the fill pattern
ctx.fillStyle = gradient;
// Set the font
ctx.font = "italic 200 36px/2 Unknown Font, sans-serif";

for (var i = 0; i < 450; i += 50) {
    ctx.fillText("Canvas 2D", i, i);
}

strokeText(text, x, y) - Renders the specified text at the specified position by using the current font, lineWidth, and strokeStyle property.
Example:
// Get a reference to the canvas
var canvas = document.getElementById('canvas');
// Get a reference to the drawing context
var ctx = canvas.getContext('2d');

var gradient = ctx.createLinearGradient(0, 0, canvas.width, 0);
// Add the colors with fixed stops at 1/4 of the width.
gradient.addColorStop("0", "magenta");
gradient.addColorStop(".25", "blue");
gradient.addColorStop(".50", "green");
gradient.addColorStop(".75", "yellow");
gradient.addColorStop("1.0", "red");

// Set the stroke pattern
ctx.strokeStyle = gradient;
// Set the font
ctx.font = "italic 200 36px/2 Unknown Font, sans-serif";

for (var i = 0; i < 450; i += 50) {
    ctx.strokeText("Canvas 2D", i, i + 50);
}

Transformations
This set of functions alters the location and orientation of the drawn geometry within the canvas element. Their use involves some basic math understanding, and we will cover them separately in other articles. Regardless, here is a list of all the available functions for this purpose.
State
save
restore
Matrix Transformations
scale
rotate
translate
transform
setTransform
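Although we will cover these functions in detail later, the following sketch hints at what the context does internally: it keeps a current transformation matrix that the matrix functions multiply into, and save()/restore() push and pop that state on a stack. This is an illustrative simulation in plain JavaScript, not the actual canvas implementation; createState and its helpers are hypothetical names.

```javascript
// Simulate the context's transform state as a 2x3 matrix [a, b, c, d, e, f],
// the same six numbers transform() and setTransform() take.
function createState() {
    return {
        m: [1, 0, 0, 1, 0, 0],              // identity, like setTransform(1,0,0,1,0,0)
        stack: [],
        transform: function (a, b, c, d, e, f) {
            // Multiply the current matrix by the new one.
            var m = this.m;
            this.m = [
                m[0] * a + m[2] * b, m[1] * a + m[3] * b,
                m[0] * c + m[2] * d, m[1] * c + m[3] * d,
                m[0] * e + m[2] * f + m[4], m[1] * e + m[3] * f + m[5]
            ];
        },
        translate: function (x, y) { this.transform(1, 0, 0, 1, x, y); },
        scale: function (x, y) { this.transform(x, 0, 0, y, 0, 0); },
        save: function () { this.stack.push(this.m.slice()); },
        restore: function () { if (this.stack.length) this.m = this.stack.pop(); },
        apply: function (x, y) {            // map a point through the current matrix
            var m = this.m;
            return [m[0] * x + m[2] * y + m[4], m[1] * x + m[3] * y + m[5]];
        }
    };
}

var state = createState();
state.save();
state.translate(10, 20);
state.scale(2, 2);
console.log(state.apply(5, 5));             // [ 20, 30 ]
state.restore();
console.log(state.apply(5, 5));             // [ 5, 5 ] - back to identity
```

The real context does the same bookkeeping for us, which is why the order of translate/rotate/scale calls matters.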
Examples
The source code for these examples is hosted on GitHub and you can download it and do whatever you want with it. Likewise, the live demos can be accessed by clicking this link.
Conclusion
In this article, we discussed the basic drawing functions provided by the CanvasRenderingContext2D API. This API is widely used for different purposes, among them chart libraries and data visualization tools, which demonstrates the importance of these graphics tools in real-world, data-driven applications.
Moving forward in this series, we will use these functions to create simple 2D games. Once we have mastered the simpler Canvas 2D Context API, we will move to the 3D space.
The Graphics Pipeline & The Rendering Context
Introduction
This is the second article of an ongoing series about game development for the Web. In the first article, we talked briefly about the <canvas> HTML element, our “Web based painting fabric” and the portal which makes possible conveying to the user a game experience.
In this installment, we will discuss the graphics pipeline (that is, the order in which we process data before rendering anything on the screen) and the rendering context. These are major concepts in computer graphics, and understanding them will become crucial as we move forward in the series.
The graphics pipeline
In computer graphics, the word "render" refers to the process of generating an image on your computer screen from a geometric object. Rendering is a multi-step process, often described in terms of a graphics pipeline.
At the start of the pipeline, we feed in polygon (triangle) vertices and color information about them (which represent the models that comprise a scene), bitmaps to paint onto some of the scene objects or use as backgrounds, and perhaps some locations of lights. At the other end of the pipeline a two-dimensional color image appears in a memory location called the frame buffer.
The paint brush
Ever since the early days of real-time graphics rendering, the triangle has been the paintbrush with which scenes are drawn. Although modern GPUs can perform all sorts of flashy effects and calculations to cover up this dirty little secret, underneath it all, triangles are still the geometric objects with which they work.
The space transformations
There are several different coordinate systems associated with the rendering pipeline. The vertices of a model are typically stored in object space or model space, a coordinate system that is local to the particular model and used only by that model. The position and orientation of each model are often stored in world space, a global coordinate system that ties all of the object spaces together. Before an object can be rendered, its vertices must be transformed into camera space (also called eye space), the space in which the x and y axes are aligned to the display and the z-axis is parallel to the viewing direction. It is possible to transform vertices from model space directly into camera space by concatenating the matrices representing the transformations from model space to world space and from world space to camera space. The product of these transformations is called the model-view transformation. The Matrix Multiply step transforms the controlling vertices of our polygons, into rotated or translated locations away from their standard positions through these different spaces.
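As a toy illustration of the model-view concatenation described above, here is the idea in 2D homogeneous coordinates (in JavaScript, since the series uses it; multiply, translation and apply are hypothetical helpers, and the specific positions are made up for the example):

```javascript
// 3x3 row-major matrix product a * b.
function multiply(a, b) {
    var r = [];
    for (var i = 0; i < 3; i++)
        for (var j = 0; j < 3; j++) {
            var s = 0;
            for (var k = 0; k < 3; k++) s += a[i * 3 + k] * b[k * 3 + j];
            r[i * 3 + j] = s;
        }
    return r;
}

// A pure translation matrix in homogeneous 2D coordinates.
function translation(tx, ty) { return [1, 0, tx, 0, 1, ty, 0, 0, 1]; }

// Transform a point [x, y, 1] by a matrix.
function apply(m, v) {
    return [
        m[0] * v[0] + m[1] * v[1] + m[2],
        m[3] * v[0] + m[4] * v[1] + m[5]
    ];
}

var modelToWorld = translation(100, 50);    // the object sits at (100, 50) in the world
var worldToCamera = translation(-60, -20);  // the camera sits at (60, 20) in the world

// Concatenate once, then reuse: this product is the model-view transformation.
var modelView = multiply(worldToCamera, modelToWorld);

// A model-space vertex at the object's origin lands at (40, 30) in camera space.
console.log(apply(modelView, [0, 0, 1]));   // [ 40, 30 ]
```

The payoff is that every vertex of the model goes through one matrix instead of two.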
Rasterisation
The rasteriser takes each triangle, clips it and discards parts that are outside of the screen, and breaks the remaining visible parts into pixel-sized fragments. The outputs are also interpolated across the rasterised surface of each triangle, assigning a smooth gradient of values to each fragment.
The location into which we draw these pixels is the frame buffer. If we have some bitmaps that we want to use, say as backgrounds, we copy them into the frame buffer during the rasterisation step. The raster operator tends to combine several bitmaps in an operation called bit blit, short for bit-level block transfer.
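The bit-level block transfer just mentioned can be sketched as a nested loop that copies one rectangular region of pixels into another. This is a toy version: the one-value-per-pixel, row-major buffer layout is an assumption made for illustration.

```javascript
// Copy a w x h block of pixels from a source buffer (width srcW), starting at
// (sx, sy), into a destination buffer (width dstW) at (dx, dy).
function blit(src, srcW, sx, sy, w, h, dst, dstW, dx, dy) {
    for (var row = 0; row < h; row++) {
        for (var col = 0; col < w; col++) {
            dst[(dy + row) * dstW + (dx + col)] = src[(sy + row) * srcW + (sx + col)];
        }
    }
}

// Copy a 2x2 block from the top-left of a 4x4 "background" into a 4x4 frame buffer.
var background = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1];
var frameBuffer = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0];
blit(background, 4, 0, 0, 2, 2, frameBuffer, 4, 1, 1);
console.log(frameBuffer); // the 2x2 block of 1s now sits at (1, 1)
```

Real hardware does this in bulk, but the principle is the same rectangular copy.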
Usually the frame buffer is an off-screen memory bitmap or a RAM region on the graphics card, as it is too visually disturbing to see the drawing happening bit by bit. Once the pipeline finishes filling the frame buffer, the Display Frame Buffer step makes the frame buffer image appear in a visible window. When the hardware allows it, the buffer is displayed using a technique called page flipping or buffer swapping.
The idea behind buffer swapping is that, rather than moving the pixel information from one region of memory to another, you simply change the address that the graphics card uses as the base location from which to refresh the visual information in your window. Doing a buffer swap for an on-screen window is harder than doing it for an entire screen, but newer graphics cards and graphics libraries allow this.
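A minimal sketch of that idea: swapping which buffer the display refreshes from is just an exchange of references, not a pixel copy. The buffer names here are illustrative.

```javascript
// Two equally sized pixel buffers.
var bufferA = new Array(16).fill(0);
var bufferB = new Array(16).fill(0);

var displayBuffer = bufferA;    // what the "screen" refreshes from
var drawBuffer = bufferB;       // what the pipeline renders into

// O(1): exchange the references instead of copying pixels.
function swapBuffers() {
    var tmp = displayBuffer;
    displayBuffer = drawBuffer;
    drawBuffer = tmp;
}

drawBuffer[0] = 255;            // draw off-screen
swapBuffers();                  // flip
console.log(displayBuffer[0]);  // 255 - the finished frame is now visible
```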
It is important to note that, in the 2D rasterisation step, the graphics system draws directly into the frame buffer. However, when rasterising a 3D scene, the system draws into an intermediate buffer called the Z-buffer.
In conclusion, the graphics pipeline describes the transformation of a vertex buffer that defines the polygons of our scene into a frame buffer, which is what is finally rendered into the screen. Of course, this operation is intensive and requires multiple passes in order to reflect changes in the scene and transformations of the shapes involved in it.
The Rendering Context
The rendering context is where all the pieces of a graphics library come together to actually display bits on an output device. As such, the rendering context implementation is unique to each type of output device. In the <canvas> element, the rendering context's main role is to transform geometric primitives into bits. In addition, it manages state such as fonts, transformations, colors, etc. A rendering context can be thought of as the set of brushes and rules used to draw the game world. The <canvas> element can only have one rendering context per instance.
The browser window itself is managed by a rendering context whose rules for showing content are given by a famous markup language called HTML.
Since the rendering context is the focal point for manipulating bits, there are additional methods for creating off-screen drawing surfaces and managing double buffering. The "context API", as it is called in the <canvas> element, handles double buffering automatically for us, as well as many other operations that happen off-screen.
Conclusion
It is very important to understand what the API does for us and how things get into our screen. This way we can optimize the handling of our geometric objects before feeding them to the frame buffer. Initially most of these considerations will not be taken into account since the <canvas> 2D context will take care of everything for us, but as we move forward into WebGL, by design we will have more control over the way things get drawn into the screen.
References
Weisstein, Eric W. "Coordinate System." From MathWorld--A Wolfram Web Resource. http://mathworld.wolfram.com/CoordinateSystem.html
Groff, Joe "An intro to modern OpenGL. Chapter 1: The Graphics Pipeline" http://duriansoftware.com/joe/An-intro-to-modern-OpenGL.-Chapter-1:-The-Graphics-Pipeline.html
Game development for the Web
Introduction
In the following series of blog posts, I will be mixing my passion for game development with the Web, in an attempt to stay true to my professional development roots. The focus of this series will be 2D game development, specifically with the CanvasRenderingContext2D API for the canvas HTML 5 element.
2D? What about 3D?
Math plays a big part in Game Development or anything related to graphics. With that in mind, it makes sense to start representing stuff in a two dimensional space, since, conceptually math is simpler and less overwhelming. After we build some confidence in our math and physics in a 2D space, making the jump to the 3D space will be intuitive and less daunting.
In addition, something to keep in mind is the fact that regardless of our perspective of spaces in games or graphics simulations our screen is a two dimensional surface. Therefore, 2D feels just right as a starting point.
Assumptions
The content of this series will assume that you have some basic JavaScript/HTML/CSS knowledge and understand some linear algebra concepts at a basic level. On occasion, I will add references for some of the math-related concepts I use, but I won't go very deep into math theory.
So, without further ado, let's jump right into it!
The Canvas Element
The <canvas> element is, basically, a drawing region to display programmable graphics in a Web Page.
Apple introduced the element in 2004 as part of their browser layout engine, WebKit (Safari). Other layout engines like Gecko (Firefox) and Presto (Opera) later adopted the element. It is now part of the WHATWG Web Applications 1.0 specification, also known as HTML5; the Web Hypertext Application Technology Working Group (WHATWG) standardized it as part of its proposed specifications for next-generation web technologies.
Relevant stuff
The canvas element was initially conceived as a self-closing tag, just like the <img> tag. This was later changed in order to have a fall back mechanism for unsupported layout engines. Therefore, in order to declare a canvas element you must have a starting tag <canvas> and a closing tag </canvas>.
By default, the <canvas> element has no border and no content. Borders for the area must be specified explicitly.
As a good practice you should specify an id attribute (to be referenced in script code), and a width and height attribute to define the size of the canvas.
In case the size is not specified explicitly the default size of the canvas is 300px * 150px (width * height).
All drawing on the canvas must be done from JavaScript code.
It is encouraged to avoid the usage of the <canvas> element when a more suitable element is available for our needs.
When no styling rules are applied to the canvas it will initially be fully transparent.
Even though the element is supported by all major browsers I suggest that you go here and confirm that the rendering engine of the browser of your choice does support the canvas element and its context API’s.
Basic usage
Let us define a canvas.
<canvas id="mycanvas" width="300" height="150"></canvas>
The <canvas> element has only two attributes: width and height. These are both optional and can be set using DOM properties. By default, the canvas will initially be 300 pixels wide and 150 pixels high. The element size can be changed arbitrarily using CSS.
Fallback content
HTML5 is a relatively new standard. Moreover, even though the <canvas> element is widely supported across the current generation of browsers, older browsers (in particular, versions of Internet Explorer earlier than version 9) do not support it.
The fallback content is a feature that allows displaying alternate content to unsupported browsers. Achieving this is very straightforward; the alternate content must be placed inside of the <canvas> element. This works because the graphical operations will not be performed by the browser’s layout engine but by client side code (JavaScript).
<canvas id="mycanvas" width="300" height="150"> The canvas element is not supported by your browser. </canvas>
Adding borders
For simplicity purposes, we will add borders to our canvas. This way we will have some visuals on the drawing area. I am a fan of following web standards, so instead of dropping inline style rules for our HTML element, I will use CSS.
<style type="text/css"> canvas { border: 1px solid black; } </style>
The Rendering Context
By default, the fixed-size drawing surface created by the <canvas> element is blank. The content of this so-called surface can be accessed through one (or more) rendering contexts. At the moment, the <canvas> element supports two rendering contexts: the 2D rendering context, which is the default, and WebGL, a 3D context based on OpenGL ES. We will dig deeper into rendering contexts in the following article; for the time being, we will use the 2D rendering context in our examples.
The <canvas> element has a method called getContext(), used for two purposes:
Check for canvas support programmatically.
Obtain the rendering context and its drawing functions. getContext() takes one parameter, the type of context. In our case the context type will simply be ‘2d’ (for now ;).
var canvas = document.getElementById('mycanvas');
if (canvas.getContext) {
    var ctx = canvas.getContext('2d');
    // From now on we will use our context to draw into the canvas.
} else {
    // Fall back code goes here
}
A simple example
Let us draw a simple rectangle and color it purple.
Here's the code:
<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
    <title>The canvas element</title>
    <style type="text/css">
        canvas { border: 1px solid black; }
    </style>
    <script type="text/javascript">
        function draw() {
            var canvas = document.getElementById('mycanvas');
            if (canvas.getContext) {
                var ctx = canvas.getContext('2d');
                ctx.fillStyle = "rgb(200,0,200)";
                ctx.fillRect(70, 25, 150, 100);
            }
        }
    </script>
</head>
<body onload="draw();">
    <canvas id="mycanvas" width="300" height="150">
        The canvas element is not supported by your browser.
    </canvas>
</body>
</html>
Here's the result:
Note: Notice how in the code there's a callback function that triggers when the body element finalizes its load process.
More Rectangles
Obviously, it is possible to draw more than a shape at the same time, as well as text. Here is a more complex example:
To achieve this, I just added extra calls to the fillRect function and used a function for the font rendering. The code is really straightforward.
<script type="text/javascript">
    function draw() {
        var canvas = document.getElementById('mycanvas');
        if (canvas.getContext) {
            var ctx = canvas.getContext('2d');

            ctx.fillStyle = "rgb(246,83,20)";
            ctx.fillRect(10, 10, 55, 50);

            ctx.fillStyle = "rgb(124,187,0)";
            ctx.fillRect(70, 10, 55, 50);

            ctx.fillStyle = "rgb(0,161,241)";
            ctx.fillRect(10, 70, 55, 50);

            ctx.fillStyle = "rgb(255,187,0)";
            ctx.fillRect(70, 70, 55, 50);

            ctx.font = "30px Arial Black";
            ctx.fillStyle = "Black";
            ctx.fillText("Microsoft", 140, 75);
        }
    }
</script>
The rendering context APIs will be discussed in depth in the next article. There, we will go deeper into the functions available on the context "object" and what their purpose is.
For now you have a test bed to play with. Have fun, explore.
Conclusion
Web pages are available in any device with a Web Browser; this is a very strong reason to consider porting games for "web play". Even though the maturity of the API’s for these purposes is rather low, it is worth considering this option due to the medium's mass consumption and availability. The Internet is everywhere and for some, the only way to use it, is through a Web Browser.
Moreover, some breakthrough achievements have been made in order to run C++ native code in a browser. Both Mozilla and Google have made their moves, one with asm.js and the other with Native Client. This kind of technology is what made it possible to run the Unreal Engine 3 in Firefox.
Finally, some major companies are taking measures to enforce HTML5 features in their Web-capable ecosystems, making the <canvas> element available almost everywhere.
Dedication
The following series of articles is dedicated to Amhed A. Herrera. A special friend, whose passion and interest for technology have served as inspiration for my professional endeavors.
How to create a file syncing app using .NET & Windows Azure Blob Storage Services
Nowadays we all use cloud storage in some sort of way. Whether we use a file syncing service, e.g. Dropbox, or one of our favorite mobile apps use it in the back-end, it's all around us.
It's important to add that cloud storage plays a major role in enterprise environments, since it's part of the Backup and Recovery strategy of many major companies.
Today we'll dig into tools provided by the .NET framework and the Windows Azure SDK to build our own "file syncing service", comprising a client application for the desktop, and we'll also list our cloud-stored files from a web page.
The application flow.
In order to sync files into a cloud-based storage container, full-fledged applications like Dropbox, Google Drive and such use a complex set of rules to decide whether an action needs to be taken upon the changes that occur to a file in a watched file system directory. Our service is going to be a little simpler: we are going to upload every file that has changed in the watched directory, regardless of the operation (create, update, or delete) performed on it.
Here's a flow chart describing our application's flow:

This simple scheme should suffice for our demonstration. Feel free to add more complex processes to the mix.
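The flow above boils down to a single rule: any change notification triggers an upload. A minimal sketch of that decision (in JavaScript for brevity; handleChange and the event shape are hypothetical):

```javascript
// Every change notification, whatever the operation, results in an upload.
function handleChange(event, upload) {
    // A fuller service would diff hashes or timestamps here; we upload always.
    upload(event.path);
    return event.path;
}

var uploaded = [];
['created', 'changed', 'deleted'].forEach(function (type) {
    handleChange({ type: type, path: '/watched/readme.txt' }, function (p) {
        uploaded.push(p);
    });
});
console.log(uploaded.length); // 3 - one upload per notification
```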
The FileSystemWatcher Class
Instead of creating our own mechanism for watching changes to a directory, we'll use a useful class that the .NET framework provides: the FileSystemWatcher class, whose sole purpose is to listen to file system change notifications and raise events when a directory, or a file in a directory, changes.
FileSystemWatcher provides a set of notification filters that allow us to easily identify changes to file attributes, last write dates and times, or simply file size. We can also use wildcards to filter the files we are watching by type, e.g. if we set *.png as our file type filter, we only get notified of changes made to PNG files. It's important to note that, for the time being, the filter only supports one wildcard setting, which implies that if we want a set of rules for specific file types, we have to watch them all and filter manually.
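Since the filter only supports a single wildcard, watching several file types means watching everything and checking extensions ourselves. Here is a sketch of that manual check (shown in JavaScript for brevity; isWatchedType is a hypothetical helper, and the C# equivalent would live inside the change handler):

```javascript
// Return true when the file's extension is one of the watched types.
function isWatchedType(path, extensions) {
    var dot = path.lastIndexOf('.');
    if (dot < 0) return false;
    var ext = path.slice(dot).toLowerCase();
    return extensions.indexOf(ext) !== -1;
}

var watched = ['.png', '.jpg', '.gif'];
console.log(isWatchedType('C:/images/logo.PNG', watched));  // true
console.log(isWatchedType('C:/images/notes.txt', watched)); // false
```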
For simplicity purposes we are going to use the example displayed on the MSDN page as our base program and build from there.
Here's the code:
using System;
using System.IO;
using System.Security.Permissions;

public class Watcher
{
    public static void Main()
    {
        Run();
    }

    [PermissionSet(SecurityAction.Demand, Name="FullTrust")]
    public static void Run()
    {
        string[] args = System.Environment.GetCommandLineArgs();

        // If a directory is not specified, exit program.
        if (args.Length != 2)
        {
            // Display the proper way to call the program.
            Console.WriteLine("Usage: Watcher.exe (directory)");
            return;
        }

        // Create a new FileSystemWatcher and set its properties.
        FileSystemWatcher watcher = new FileSystemWatcher();
        watcher.Path = args[1];

        /* Watch for changes in LastAccess and LastWrite times, and
           the renaming of files or directories. */
        watcher.NotifyFilter = NotifyFilters.LastAccess | NotifyFilters.LastWrite
            | NotifyFilters.FileName | NotifyFilters.DirectoryName;

        // Only watch text files.
        watcher.Filter = "*.txt";

        // Add event handlers.
        watcher.Changed += new FileSystemEventHandler(OnChanged);
        watcher.Created += new FileSystemEventHandler(OnChanged);
        watcher.Deleted += new FileSystemEventHandler(OnChanged);
        watcher.Renamed += new RenamedEventHandler(OnRenamed);

        // Begin watching.
        watcher.EnableRaisingEvents = true;

        // Wait for the user to quit the program.
        Console.WriteLine("Press \'q\' to quit the sample.");
        while (Console.Read() != 'q');
    }

    // Define the event handlers.
    private static void OnChanged(object source, FileSystemEventArgs e)
    {
        // Specify what is done when a file is changed, created, or deleted.
        Console.WriteLine("File: " + e.FullPath + " " + e.ChangeType);
    }

    private static void OnRenamed(object source, RenamedEventArgs e)
    {
        // Specify what is done when a file is renamed.
        Console.WriteLine("File: {0} renamed to {1}", e.OldFullPath, e.FullPath);
    }
}
It's important to understand that our application needs to use FullTrust as its permission profile, which is achieved with the PermissionSet attribute. Don't worry about this: all applications of this type need that kind of permission in order to access file system information about reads/writes to our directory (remember the UAC popup you got when you installed Dropbox, Google Drive, etc.? Well, this is that).
What do we have?
Amazingly, with these few lines of code we already have a system that "listens" to a specific directory and notifies us of CRUD operations on text files.
If you run this code this is what you should see:

The Cloud
Now that we have our directory listening in place, let's setup our cloud storage. I'll use Windows Azure Blob Storage Service to host my files in the cloud. Keep in mind that this can be changed anytime for any other option out there, like Amazon S3.
Glossary
Cloud storage is a model of networked storage where data is stored, accessed and managed on a third party's infrastructure, allowing the user to take advantage of the scale and efficiency of virtualized pools of storage.
Blob (binary large object), according to SearchSQLServer, is a collection of binary data stored as a single entity in a database management system. Blobs are typically images, audio or other multimedia objects, etc.
Container, grouping of a set of blobs.
You can read more about Windows Azure Blob Storage Service specific terminology and the code needed to store, access and manage files here.
Let's code
From this point on I'll assume that you have a Windows Azure Storage account setup and you have installed the Windows Azure SDK. If you need help with this, follow the steps from this guide.
In order to access our storage account from code we need to define a connection string for it, the format for the app.config / web.config entry is as follows:
<add key="StorageConnectionString" value="DefaultEndpointsProtocol=https;AccountName=[AccountName];AccountKey=[AccountKey]" />
Just replace the information with the strings provided in the Windows Azure portal. You can read more about how to configure the connection strings here and here.
First let's add the references to the namespaces we'll use.
using Microsoft.WindowsAzure.Storage; using Microsoft.WindowsAzure.Storage.Blob;
With our connection string in place it's time to connect to our cloud storage account.
// Retrieve the connection string node from the config file
var config = ConfigurationManager.ConnectionStrings["StorageConnectionString"];

// Let's create an instance of our storage account
var storageAccount = CloudStorageAccount.Parse(config.ConnectionString);
Now let's create or access our blob container.
// Create the blob client.
var blobClient = storageAccount.CreateCloudBlobClient();

// Retrieve a reference to a container.
container = blobClient.GetContainerReference("files");

// Create the container if it doesn't already exist.
container.CreateIfNotExists();

// Make the contents of the container public.
container.SetPermissions(
    new BlobContainerPermissions
    {
        PublicAccess = BlobContainerPublicAccessType.Blob
    });
As you can see, I'm using lower case for the container name. Windows Azure Storage enforces a set of naming constraints: for example, you should avoid special characters and upper-case letters when naming your containers. These constraints are in place in order to generate valid URIs to our files in the cloud. I'd recommend reading about them before you start using the service. Here's more information about this.
The blob container instance will represent our interface between the blob storage in the cloud and our source of files. Through it we'll be able to access and manage our files with ease.
Uploading to the cloud
Now that we have a container to put our files in, we need to write the code that uploads the files to it. In order to achieve this we need to write very little code.
// Retrieve reference to a blob named "myfile".
CloudBlockBlob blockBlob = container.GetBlockBlobReference("myfile");

// Create or overwrite the "myfile" blob with contents from a local file.
using (var fileStream = File.OpenRead(localFilePath))
{
    blockBlob.UploadFromStream(fileStream);
}
This is all we need, but we have to take two things into account:
If we choose to create blob references based on our local file name we need to strip down the string (that represents the file name) and put everything in lower case.
The UploadFromStream method will overwrite any file in the container using the same blob reference.
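The name clean-up described in the first point can be sketched as follows (in JavaScript for brevity; toBlobName is a hypothetical helper, and the regular expression mirrors the lower-case-and-strip rule rather than the full Azure naming specification):

```javascript
// Lower-case the local file name and strip characters outside [a-z0-9.]
// before using it as a blob reference.
function toBlobName(fileName) {
    return fileName.toLowerCase().replace(/[^a-z0-9.]/g, '');
}

console.log(toBlobName('My Photo (1).JPG')); // "myphoto1.jpg"
```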
The final client code
With all the discussed updates, our code is going to end up looking like this:
using System;
using System.Configuration;
using System.IO;
using System.Security.Permissions;
using System.Text;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

namespace FileSystemWatcher
{
    public class Watcher
    {
        private static CloudBlobContainer _container;

        public static void Main()
        {
            // Retrieve the connection string node from the config file
            var config = ConfigurationManager.ConnectionStrings["StorageConnectionString"];

            // Let's create an instance of our storage account
            var storageAccount = CloudStorageAccount.Parse(config.ConnectionString);

            // Create the blob client.
            var blobClient = storageAccount.CreateCloudBlobClient();

            _container = CreateContainer("mycontainer", blobClient);

            Run();
        }

        public static CloudBlobContainer CreateContainer(string containerName, CloudBlobClient blobClient)
        {
            // Retrieve a reference to a container.
            _container = blobClient.GetContainerReference(containerName);

            // Create the container if it doesn't already exist.
            _container.CreateIfNotExists();

            // Make the contents of the container public.
            _container.SetPermissions(
                new BlobContainerPermissions
                {
                    PublicAccess = BlobContainerPublicAccessType.Blob
                });

            return _container;
        }

        public static void UploadBlob(string filePath)
        {
            string fileName = Path.GetFileName(filePath);

            // Retrieve reference to a blob named after the local file.
            CloudBlockBlob blockBlob = _container.GetBlockBlobReference(RemoveSpecialCharacters(fileName));

            // Create or overwrite the blob with contents from the local file.
            using (var fileStream = File.OpenRead(filePath))
            {
                blockBlob.UploadFromStream(fileStream);
            }
        }

        [PermissionSet(SecurityAction.Demand, Name = "FullTrust")]
        public static void Run()
        {
            string[] args = Environment.GetCommandLineArgs();

            // If a directory is not specified, exit program.
            if (args.Length != 2)
            {
                // Display the proper way to call the program.
                Console.WriteLine("Usage: Watcher.exe (directory)");
                return;
            }

            /* Watch for changes in LastAccess and LastWrite times, and
             * the renaming of files or directories. */
            var watcher = new System.IO.FileSystemWatcher
            {
                Path = args[1],
                NotifyFilter = NotifyFilters.LastAccess | NotifyFilters.LastWrite
                    | NotifyFilters.FileName | NotifyFilters.DirectoryName,
                Filter = "*.*"
            };

            // Add event handlers.
            watcher.Changed += OnChanged;
            watcher.Created += OnChanged;
            watcher.Deleted += OnChanged;
            watcher.Renamed += OnRenamed;

            // Begin watching.
            watcher.EnableRaisingEvents = true;

            // Wait for the user to quit the program.
            Console.WriteLine("Press \'q\' to quit the sample.");
            while (Console.Read() != 'q')
            {
                // Just wait and bleed.
            }
        }

        // Define the event handlers.
        private static void OnChanged(object source, FileSystemEventArgs e)
        {
            // Specify what is done when a file is changed, created, or deleted.
            Console.WriteLine("File: " + e.FullPath + " " + e.ChangeType);
            Console.WriteLine("Uploading...");

            UploadBlob(e.FullPath);

            Console.WriteLine("{0} uploaded successfully to container {1}.", e.Name, _container.Name);
        }

        private static void OnRenamed(object source, RenamedEventArgs e)
        {
            // Specify what is done when a file is renamed.
            Console.WriteLine("File: {0} renamed to {1}", e.OldFullPath, e.FullPath);
            Console.WriteLine("Uploading...");

            UploadBlob(e.FullPath);

            Console.WriteLine("{0} uploaded successfully to container {1}.", e.Name, _container.Name);
        }

        public static string RemoveSpecialCharacters(string str)
        {
            var sb = new StringBuilder();
            foreach (char c in str)
            {
                if ((c >= '0' && c <= '9') || (c >= 'A' && c <= 'Z') || (c >= 'a' && c <= 'z') || c == '.')
                {
                    sb.Append(c);
                }
            }
            return sb.ToString();
        }
    }
}
It's worth mentioning that this is a very simple implementation with no special rules based on file types or on the kind of change that occurred. The program attempts to upload every file on every event, regardless of whether a blob with the same name already exists in the container (note that Deleted events will also trigger an upload attempt, which will fail because the file no longer exists). Feel free to modify this so it fits your own needs.
If everything works as expected, your program should already be uploading files to your container. To check that it's working properly, go to the Windows Azure portal and look inside your container; you should see some files listed there.

Listing our uploaded files
Now we can write our web front-end for our container. Think of it as the web interface of Dropbox or Google Drive. In order to achieve this, we just need to list the files in the container and then format the way they get displayed however we want.
In order to list the files in a container we need to write very little code.
// Loop over items within the container and output the length and URI.
foreach (IListBlobItem item in container.ListBlobs(null, false))
{
    var blockBlob = item as CloudBlockBlob;
    if (blockBlob != null)
    {
        files.Add(new BlobInfo
        {
            Name = blockBlob.Name,
            Length = blockBlob.Properties.Length,
            Uri = blockBlob.Uri.ToString()
        });
    }
    else
    {
        var pageBlob = item as CloudPageBlob;
        if (pageBlob != null)
        {
            files.Add(new BlobInfo
            {
                Name = pageBlob.Name,
                Length = pageBlob.Properties.Length,
                Uri = pageBlob.Uri.ToString()
            });
        }
        else
        {
            var blobDirectory = item as CloudBlobDirectory;
            if (blobDirectory != null)
            {
                // Directories have no content length; expose the prefix and URI.
                files.Add(new BlobInfo
                {
                    Name = blobDirectory.Prefix,
                    Length = 0,
                    Uri = blobDirectory.Uri.ToString()
                });
            }
        }
    }
}
You can create a WebForms or MVC site and display the container contents in a page or view.
In my case I chose to use an MVC site and this is how my files look when listed in my front-end.

My front-end is hosted here just in case you want to download the files in my storage container.
Ideally you could modify the file-watching implementation to be smarter and use thread-safe queues that evaluate files by priority or whatever rule comes to mind. It's also a good scenario for playing with the async and await constructs that were added to C# to support asynchronous programming.
Conclusion
Cloud computing has recently become the "main event" in the IT landscape. There's a lot of fizz & buzz about it and how it affects our lives on a day-to-day basis. The "cloud" has become the back-end architecture that powers many of the applications we use today, whether on the desktop, in the browser, or on our mobile devices.
The software development ecosystem has been revolutionized by cloud computing, allowing applications to be available to everyone regardless of platform. I encourage you to exploit these resources to deliver more awesome apps that change people's day-to-day lives.
Text
Instant Glew: Review

Graphics programming is a hot topic because of the growth of game development and animated movies in recent years. Thanks to them, animated graphics have gone from niche markets to the mainstream. As a consequence, there is a fast pace of innovation and huge competition in the 3D graphics hardware realm. Rendering APIs must therefore keep up with vendors in order to expose cutting-edge features that can make a difference in the quality of the content we consume; this is what makes the OpenGL Extension Wrangler Library (Glew) relevant.
Instant Glew is part of Packt's Instant series, described as "a short, fast, focused guide delivering immediate results". The book is listed at USD $12.74 for a digital download on Packt's site and USD $19.58 for the paperback edition on Amazon. Before reading it I had high hopes it would become my default reference on a subject that isn't covered in much depth elsewhere, but after taking a couple of days to go through it thoroughly I ended up a little disappointed.
The book's one strength is that it gives a good introduction to Glew and OpenGL extensions as a whole. It provides detailed information about where OpenGL extensions sit in the graphics pipeline and why you would use them. Other than that, it feels like a long technical blog post that got published as a book.
The book fails to explore the multi-platform capabilities of OpenGL and Glew by only providing a thorough description of how to "install" Glew on Windows with Visual Studio. I understand this is a tendency in many technical books, since this is the most common setting in the gaming industry, but there's no harm in pointing users to other sources for setup instructions on other platforms. Especially since most of the content in the book revolves around the installation guide.
The code used in the examples follows legacy OpenGL and GLSL programming styles. There's code for loading and compiling shaders, but no convincing explanation of why to use them or of the difference between the fixed-function pipeline and the shader pipeline. In addition, some of the extensions used in the examples are already part of the OpenGL core. In my opinion, special attention should have been paid to the OpenGL extension types and to the meaning of the suffix in extension function names (e.g. ARB), as well as to the most important feature of Glew: determining at runtime which OpenGL extensions the target platform supports.
Conclusion
I don't see the point of paying for an installation guide when the documentation for this purpose is freely available, as is other thorough information about OpenGL extensions in general. There's no special reason that separates the book's content from information you can already find for free on the web.
Photo

I don't tend to write this kind of short announcement post, but I thought this was worth sharing. I've been blogging for more than two years, but the truth is that since I moved my blog to Tumblr I've been more serious about it. So yes, Mr. Roa turned 2 today! (Jul 18, 2013)
Thanks for reading my content and expect much more in the years to come!
Text
TIP/Trick: How to retrieve a sub-string between two delimiters using C++
The std::string class exists as an alternative to the NULL-terminated char array commonly used in C. It provides methods for simple string-management tasks that would otherwise be daunting, requiring dynamic-memory tricks and leak-prone functions. Unless, of course, you implement your own.
Last week I found myself wanting to retrieve a sub-string between two delimiters. People who write programs that parse HTML, XML or plain text files, might find this a common problem for which they may already use an existing solution from a library like boost.
Yet, with learning in mind I wrote my own.
The Code
The code to achieve this is fairly straightforward:
const std::string emptyString = "";

std::string ExtractString( std::string source, std::string start, std::string end )
{
    std::size_t startIndex = source.find( start );

    // If the starting delimiter is not found in the string,
    // stop the process: you're done!
    if( startIndex == std::string::npos )
    {
        return emptyString;
    }

    // Adding the length of the delimiter to our starting index
    // moves us to the beginning of our sub-string.
    startIndex += start.length();

    // Looking for the end delimiter.
    std::string::size_type endIndex = source.find( end, startIndex );

    // Returning the substring between the start index and the end index.
    // Note: if the end delimiter is not found, endIndex is npos and
    // substr clamps the count, so everything after the start delimiter
    // is returned.
    return source.substr( startIndex, endIndex - startIndex );
}
Client code call
// Returns "hello world"
std::string html = "<div>hello world</div>";
std::string extracted = ExtractString( html, "<div>", "</div>" );

// Returns "12"
std::string wrapped = "(12)";
std::string number = ExtractString( wrapped, "(", ")" );
I hope you find this useful.
Text
TIP/Trick: How to count words in a text file using C++
Sometimes we find ourselves handling data files whose integrity needs to be checked or compared against rules about byte size, number of lines, or number of words. Hence the need for a word-counting function in our string extensions library.
Working with files in mid-level languages like C and C++ can be an obscure task for people who are getting started with programming. High-level languages like C#, Java, or Ruby do a pretty good job of creating an intuitive abstraction layer that avoids overwhelming the programmer with the raw, low-level nuances of handling files.
Loading the file
I've implemented the countWords function for both null-terminated char sequences and std::strings, as well as a read-file function for each case. Here's the code:
int readFileToBuffer( char * filePath, char *& buffer )
{
    /*
     * VC++ will give you a warning message when using fopen.
     * The function has been flagged as unsafe and you have two choices.
     * First: use fopen_s http://msdn.microsoft.com/en-us/library/z5hh6ee9(v=vs.80).aspx
     *   This will limit your implementation to Windows-based machines.
     * Second: add _CRT_SECURE_NO_WARNINGS to the compiler preprocessor definitions.
     *   This will get rid of the warning message.
     * Microsoft's arguments for flagging this function are strong and appropriate; nevertheless,
     * I know what I'm doing and that's why I chose to stick to the fopen function.
     * If you want to know more about this, google is your friend ;-)
     */
    FILE * file = fopen( filePath, "r" );

    if( file == NULL )
        return 0;

    fseek( file, 0, SEEK_END );
    long fsize = ftell( file ); // http://www.cplusplus.com/reference/cstdio/ftell/
    fseek( file, 0, SEEK_SET );

    buffer = ( char * ) malloc( fsize + 1 );
    fread( buffer, fsize, 1, file );
    fclose( file );

    buffer[fsize] = 0;

    return fsize;
}

int readFileToBuffer( char * file, std::string& buffer )
{
    std::ifstream t( file );

    t.seekg( 0, std::ios::end ); // http://www.cplusplus.com/reference/istream/istream/tellg/
    int fsize = static_cast< int >( t.tellg() );
    buffer.reserve( fsize );
    t.seekg( 0, std::ios::beg );

    buffer.assign( ( std::istreambuf_iterator< char >( t ) ),
                   std::istreambuf_iterator< char >() );

    return fsize;
}
Counting the Words
Disclaimer: This is a very simple implementation and some edge cases have been obviated. This is just an attempt of pointing in the right direction people that struggle when trying to approach this task.
int countWords( const char* str )
{
    if ( str == NULL )
        return 0;

    int numWords = 1;

    while ( *str++ != '\0' )
        if( *str == ' ' )
            numWords++;

    return numWords;
}

int countWords( const std::string& str )
{
    if ( str.empty() )
        return 0;

    int numWords = 1;

    for( unsigned int i = 1; i < str.length(); ++i )
        if( str[i] == ' ' )
            numWords++;

    return numWords;
}
The client code for this implementation is very simple:
readFileToBuffer( "loremipsum.txt", string );
int x = countWords( string );
The full implementation of this source code can be found here.
Enjoy!
Text
Highly divisible triangular number using primes
I found myself browsing through Project Euler today during my free time. After reading a couple of problems I paid special attention to Problem #12.
The problem goes like this:
The sequence of triangle numbers is generated by adding the natural numbers. So the 7th triangle number would be 1 + 2 + 3 + 4 + 5 + 6 + 7 = 28. The first ten terms would be:
1, 3, 6, 10, 15, 21, 28, 36, 45, 55, ...
Let us list the factors of the first seven triangle numbers:
1: 1 3: 1,3 6: 1,2,3,6 10: 1,2,5,10 15: 1,3,5,15 21: 1,3,7,21 28: 1,2,4,7,14,28
We can see that 28 is the first triangle number to have over five divisors.
What is the value of the first triangle number to have over five hundred divisors?
The Solution
Getting the answer to this problem is fairly simple, even though the obvious approach might not be the best way to go. Let's see what we can learn from this.
Brute force!
Brute forcing your way through this problem is a possible solution, though it might take a long wait.
for( int i = 1; true; ++i )
{
    int triangle = 0;

    // Obtaining the triangle number
    for( int j = 1; j <= i; ++j )
        triangle += j;

    int factors = 0;

    // Counting the factors
    for( int k = 1; k <= triangle; ++k )
    {
        if( triangle % k == 0 )
            ++factors;
    }

    // If factors are bigger than 499 then exit
    if( factors > 499 )
    {
        std::cout << "Triangle number: " << triangle << " factors: " << factors << std::endl;
        break;
    }
}
Results
Triangle number: 76576500
Time: 974.812 s ≈ 16 min
Even though this is not the optimal solution for obvious reasons, it's a good way of proving yourself that you understood the problem and that you know how to get to the answer.
When solving problems like this, I tend to write the brute-force version of the solution first. In addition, I use small test cases just to prove I'm on the right track, and then optimize in order to solve the problem efficiently.
Optimizing: Triangle Numbers
First let us dig into triangular or triangle numbers. According to the problem description, triangle numbers come from a sequence. If we use the sequence definition in order to get the triangle numbers, we can remove one of the bottlenecks of our solution.
Let's start by saying that triangle numbers are the additive analog of factorials. Thus the sequence definition is T(n) = 1 + 2 + ⋯ + n.
This article goes into depth demonstrating that T(n) = n(n + 1)/2.
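The closed form is easy to re-derive with Gauss's classic pairing argument (a standard derivation, not from the original article): write the sum forwards and backwards and add term by term.

```latex
\begin{aligned}
T(n)  &= 1 + 2 + \cdots + (n - 1) + n \\
T(n)  &= n + (n - 1) + \cdots + 2 + 1 \\
2T(n) &= \underbrace{(n + 1) + (n + 1) + \cdots + (n + 1)}_{n \text{ terms}} = n(n + 1) \\
T(n)  &= \frac{n(n + 1)}{2}
\end{aligned}
```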
If we modify our code accordingly we must end up with something like this:
for( int i = 1; true; ++i )
{
    // Obtaining the triangle number
    int triangle = i * ( i + 1 ) / 2;

    int factors = 0;

    // Counting the factors
    for( int k = 1; k <= triangle; ++k )
    {
        if( triangle % k == 0 )
            ++factors;
    }

    // If factors are bigger than 499 then exit
    if( factors > 499 )
    {
        std::cout << "Triangle number: " << triangle << " factors: " << factors << std::endl;
        break;
    }
}
It's worth mentioning that addition is a fairly fast operation, so computing the triangle number was never the main bottleneck. Eliminating the loop that calculated the triangle number improves readability more than execution time; most of the time is still spent finding the number of factors.
Results
Triangle number: 76576500
Time: 951.555 s ≈ 16 min
This update isn't worthless, but it's not great either. Regardless of the technicalities, learning in depth about triangle numbers is a good addition (no pun intended ;) to our knowledge base and may help clarify future math/programming problems.
Triangle Numbers: Alternate Methods
Division is a costly operation. Purists avoid using it whenever possible. We could change our code to avoid division with little effort.
int triangleNumber( int n )
{
    int data = 0;
    int next = 1;

    while( n > 0 )
    {
        data = data + next;
        next++;
        n--;
    }

    return data;
}
I'll let you guys decide on which is the most suitable solution for your standards. In terms of performance either way should be fine.
Optimizing: Integer Factorization
People tend to interpret integer factorization in many ways. Sometimes we wrongly assert that integer factorization is prime decomposition (which is the basis of the fundamental theorem of arithmetic); we could also take integer factorization to mean identifying all the positive factors that divide a number evenly. It's important to learn the difference between these two concepts since they are not the same, and their implementations in code are also very different.
Let us take the number 28 for example.
The prime decomposition of 28 can be expressed as:
∴ 28 = 2² × 7
On the other hand, the divisors of 28 are: 1, 2, 4, 7, 14, 28.
As you can see from the comparison above, some of the divisors can also be expressed as prime products. The problem statement requires counting the divisors instead of the prime factors, but we could use both definitions to get the same result.
I will talk about divisors and a shortcut to counting them before going into prime decomposition; primes are a world apart and need special attention.
Exact Division
The definition of arithmetic division states that the dividend must equal the product of the divisor and the quotient.
For example:
28 ÷ 4 = 7 ∴ 7 × 4 = 28
This special property allows us to use a shortcut for counting the positive factors of an integer: divisors come in pairs (d, n/d), so if we count the divisors up to the integer square root of the number and double that count, we get the total number of factors.
If we modify our code with that observation in mind we can see a ridiculous performance boost when comparing the results with the previous implementation.
for( int i = 1; true; ++i )
{
    // Obtaining the triangle number
    int triangle = i * ( i + 1 ) / 2;

    int factors = 0;

    // Counting the factors up to the square root
    for( int k = 1; k <= sqrt( ( double ) triangle ); ++k )
    {
        if( triangle % k == 0 )
            ++factors;
    }

    factors *= 2;

    // If factors are bigger than 499 then exit
    if( factors > 499 )
    {
        std::cout << "Triangle number: " << triangle << " factors: " << factors << std::endl;
        break;
    }
}
Results
Triangle number: 76576500
Time: 0.224 s
As you can see, the execution time has been reduced to about a fifth of a second. The performance bump comes from the reduction in checks: the loop now runs from one to the square root of the triangle number instead of to the number itself. This is a nice improvement and got us the solution very quickly, but bear in mind that doubling the count miscounts when the triangle number is a perfect square.
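For completeness, here is a sketch of my own (not part of the original solution) showing how the pairing trick can be patched for perfect squares: each divisor pair below the square root is counted twice, and the square root itself, which pairs with itself, is counted once.

```cpp
#include <cassert>

// Counts the positive divisors of n using the divisor-pairing trick,
// corrected for perfect squares: the square root pairs with itself,
// so it must be counted only once.
int countDivisorsPaired( int n )
{
    int count = 0;
    int k = 1;

    for( ; k * k < n; ++k )
        if( n % k == 0 )
            count += 2;     // k and n / k form a distinct divisor pair

    if( k * k == n )
        ++count;            // perfect square: sqrt(n) counted exactly once

    return count;
}
```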
Primes to the rescue
By using prime decomposition we can make our code work for any scenario, perfect squares included. This last optimization will improve the performance of our code in terms of time and will reduce the edge cases to virtually none.
Let us revisit the prime decomposition of the number 28:
28 = 2² × 7
Now let us count the positive divisors (factors) of 28: 1, 2, 4, 7, 14, 28.
Grouping the exponents of the prime decomposition of 28 we end up with 2 and 1. The theory states that if we add 1 to each exponent and then multiply the results together, we get the total number of positive divisors of 28.
28 = 2² × 7
(2 + 1), (1 + 1) = ( 3, 2 ) → 3 × 2 = 6
As you can see, we have successfully used prime decomposition to retrieve the number of factors of 28.
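As a sanity check, the exponent-plus-one rule can be packed into a small sieve-free function (the name and the plain trial-division approach are mine, for illustration):

```cpp
#include <cassert>

// Counts the positive divisors of n via prime decomposition:
// for n = p1^a1 * p2^a2 * ..., the divisor count is
// (a1 + 1) * (a2 + 1) * ...
// Trial division by every candidate works because a composite
// candidate's prime factors have already been divided out.
int divisorCountFromPrimes( int n )
{
    int count = 1;

    for( int p = 2; p * p <= n; ++p )
    {
        int exponent = 0;
        while( n % p == 0 )
        {
            n /= p;
            ++exponent;
        }
        count *= exponent + 1;
    }

    if( n > 1 )     // A single prime factor larger than sqrt(n) remains.
        count *= 2;

    return count;
}
```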
Our new "trick" creates a new problem: we need to do the prime decomposition, which implies implementing an algorithm to find primes. Finding primes is not an easy task, and implementing an efficient algorithm to find them is very important.
Prime Sieves
Prime numbers and their relevance have been a fascinating subject of study to many mathematicians over time.
Due to their special property of being divisible only by themselves and one, finding primes by brute force can be very instruction-intensive for our CPU. Just imagine: to brute-force whether 13 is prime we have to try 11 residue checks, and the number of checks grows linearly for larger numbers.
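The brute-force test alluded to above can be sketched as follows (a naive version for illustration only; the function name is my own):

```cpp
#include <cassert>

// Naive primality test: try every candidate divisor from 2 to n - 1.
// For n = 13 this performs 11 residue checks, as noted above.
bool isPrimeNaive( int n )
{
    if( n < 2 )
        return false;

    for( int d = 2; d < n; ++d )
        if( n % d == 0 )
            return false;

    return true;
}
```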
To tackle this issue there are some old algorithms, called prime sieves, that help us find all primes up to a given number. According to Wolfram Alpha, a sieve is a process of successively crossing out members of a list according to a set of rules so that only some remain. A prime sieve works by creating a list of all integers up to a desired limit and progressively removing composite numbers (which it directly generates) until only primes are left.
The sieve of Eratosthenes is one of the best known prime sieves. It's also considered to be one of the simplest to implement in code.
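Since this post only shows code for the sieve of Atkin, here is a minimal sieve of Eratosthenes for comparison (my own sketch, not from the original solution):

```cpp
#include <cassert>
#include <vector>

// Sieve of Eratosthenes: returns all primes below limit by crossing
// out the multiples of each prime as it is discovered. Crossing out
// can start at p * p because smaller multiples were already removed
// by smaller primes.
std::vector<int> sieveOfEratosthenes( int limit )
{
    std::vector<bool> composite( limit > 0 ? limit : 0, false );
    std::vector<int> primes;

    for( int p = 2; p < limit; ++p )
    {
        if( composite[p] )
            continue;

        primes.push_back( p );

        for( int multiple = p * p; multiple < limit; multiple += p )
            composite[multiple] = true;
    }

    return primes;
}
```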
The previous image illustrates how the sieve of Eratosthenes works. The sieve of Eratosthenes, along with other factorization techniques like wheel factorization, reigned over the prime decomposition scene until 2003, when the paper "Prime Sieves Using Binary Quadratic Forms" by A. O. L. Atkin and D. J. Bernstein introduced a faster (but more complicated) sieve for finding prime numbers: the sieve of Atkin.
For this example we will use the translation into C++ of the pseudocode described in the paper.
#include <cmath>
#include <cstring>
#include <vector>

const int max_n = 300;
std::vector< int > primes;
bool sieve[max_n + 1]; // +1 so the n <= max_n checks below stay in bounds

void sieveOfAtkin()
{
    int N = ( int ) sqrt( ( double ) max_n );
    memset( sieve, 0, sizeof( sieve ) );

    for (int x = 1; x <= N; x++)
    {
        for (int y = 1; y <= N; y++)
        {
            int n = (4*x*x) + (y*y);
            if (n <= max_n && (n % 12 == 1 || n % 12 == 5))
                sieve[n] ^= true;

            n = (3*x*x) + (y*y);
            if (n <= max_n && n % 12 == 7)
                sieve[n] ^= true;

            n = (3*x*x) - (y*y);
            if (x > y && n <= max_n && n % 12 == 11)
                sieve[n] ^= true;
        }
    }

    sieve[2] = sieve[3] = true;
    primes.push_back(2);
    primes.push_back(3);

    int a;
    for (a = 5; a <= N; a += 2)
    {
        if (sieve[a])
        {
            for (int i = a*a; i < max_n; i += a*a)
                sieve[i] = false;
            primes.push_back(a);
        }
    }

    for (; a < max_n; a += 2)
        if (sieve[a])
            primes.push_back(a);
}
Following the sieve definition and the flow described in the illustration above this code retrieves all the prime numbers under 300, which is all we need to complete our last optimization of our solution.
The basic idea is to do a "warm-up" routine where we store in a container all the prime numbers we are going to use for the residue checks on our triangle number. Then we do the prime decomposition and apply our trick to calculate the number of positive divisors from the exponents of the prime factors.
Our counting implementation looks like this:
int countFactors(int n)
{
    int initial_n = n;
    int factors = 1;

    for (int i = 0; primes[i] * primes[i] <= n; ++i)
    {
        int power = 0;

        while (initial_n % primes[i] == 0)
        {
            initial_n /= primes[i];
            ++power;
        }

        factors *= power + 1;
    }

    if (initial_n > 1)
    {
        factors *= 2;
    }

    return factors;
}
And our resulting program looks like this:
// Warm up - retrieving all primes under 300
sieveOfAtkin();

for( int i = 1; true; ++i )
{
    // Obtaining the triangle number
    int triangle = i * ( i + 1 ) / 2;

    // Counting the factors
    int factors = countFactors( triangle );

    // If factors are bigger than 499 then exit
    if( factors > 499 )
    {
        std::cout << "Triangle number: " << triangle << " factors: " << factors << std::endl;
        break;
    }
}
Results:
Triangle number: 76576500
Time: 0.004 s
Conclusion
Knowing how to find prime numbers and how to implement prime factorization is crucial for programmers. Prime numbers are strongly tied to cryptography and other areas of the computer industry.
The mathematicians who studied prime numbers hundreds of years ago used the knowledge from primes to develop new areas of mathematics, like number theory and knot theory, which are widely used today in programming.
While you might not use prime numbers directly yourself, they are a key part of modern mathematics and have important uses in the era of computers.
Text
TIP/Trick: How to use the Paint.NET PSD Plugin to manipulate PSD files from your C# application
Last week I found myself needing to retrieve some information from a Photoshop Document (PSD) file. Since this goes beyond what the standard .NET IO library supports, I had to jump through some hoops to achieve my goal. Here's how I did it.
Paint.NET PSD Plugin
Being a Paint.NET user, I've known about this plugin for a while and have been using it to manage PSD files without having to install Photoshop. So I chose to use the library's API to manipulate the PSD files, and I must say it worked like a charm.
Things you'll need
Paint.NET PSD Plugin. You can download this from the codeplex website.

Paint.NET core DLLs. You can retrieve these from your Paint.NET installation folder.
Add a reference to the Photoshop.dll from the Paint.NET PSD Plugin.
Done!
Example code
string[] psds = Directory.GetFiles(@"PATH_GOES_HERE", "*.psd", SearchOption.AllDirectories);

foreach (var psd in psds)
{
    // Loading the psd file
    var psdFile = new PsdFile();
    psdFile.Load(psd);

    // Printing the file name and the file dimensions (width/height)
    Console.WriteLine("{0}: {1}x{2}", Path.GetFileName(psd), psdFile.ColumnCount, psdFile.RowCount);
}

Console.ReadKey();
That's all you need to edit or just retrieve info from PSD files. If you need to interact with the file in a more complex manner I would recommend reading through the documentation in order to learn if what you want to do is within the plugin's scope.
Text
ASP.NET: Custom LinkedIn OAuth Provider
The release of Visual Studio 2012 brought us a very nice feature: OAuth/OpenID support in the default Web Application templates. Now, by just adding the external API keys to a config file, you can add external account support to your application.
You can watch a short introduction to the feature here or read about it here.
Very cool stuff. However, software is never perfect, and some of the providers included by default have minor bugs; for instance, there's a little bug in the Google client regarding the way it retrieves user metadata. A quick fix, as recommended by Microsoft in that blog post, is creating a custom client and overriding the method that requests the extra data so it retrieves what you're supposed to get.
So why are we here?
Well, while playing with the other providers, such as Facebook and Twitter, I didn't have problems logging in or retrieving user metadata. BUT! When it came to LinkedIn I faced an interesting issue: the login didn't work. I thought it had to do with the keys I was using, but after checking and double-checking I came to the conclusion that it was broken.
I went to the community in search of answers to my issue and found out that others were having the same problems. Digging further into the issue log I found that there’s actually an open issue ticket for this problem.
Before this became a built-in feature I had created a LinkedIn provider with the DotNetOpenAuth library, which is the core of OAuth/OpenID in ASP.NET, so I was aware of token-related issues with LinkedIn. Since I needed LinkedIn support now and couldn't wait for an update, I went ahead and created a new custom client to fix the issue.
Show me tha’ code!
Going through the original library source code I noticed that the default constructor delegated the consumer token management to a SimpleConsumerTokenManager class.
public LinkedInClient(string consumerKey, string consumerSecret, IOAuthTokenManager tokenManager)
    : base("linkedIn",
           LinkedInClient.LinkedInServiceDescription,
           (IConsumerTokenManager) new SimpleConsumerTokenManager(consumerKey, consumerSecret, tokenManager))
{
}
I deduced that the problem was here. In a previous effort to add LinkedIn support I had gone through this example and had token issues as well. The problem is that the tokens have to be persisted; apparently, on a second request the object goes out of scope, so the tokens passed along are empty. The quick fix for that was storing the token manager in a session variable.
But the problem here could be solved in a different and simpler manner. Reading the source code, I noticed a different overload that handles the consumer key and secret key directly, without the use of any external classes.
public LinkedInClient(string consumerKey, string consumerSecret)
    : base("linkedIn", LinkedInServiceDescription, consumerKey, consumerSecret)
{
}
So I tested this one by setting it as my default constructor and voilà! Everything worked like a charm. Here's the resulting code:
public class LinkedInCustomClient : OAuthClient
{
    private static XDocument LoadXDocumentFromStream(Stream stream)
    {
        var settings = new XmlReaderSettings
        {
            MaxCharactersInDocument = 65536L
        };
        return XDocument.Load(XmlReader.Create(stream, settings));
    }

    /// Describes the OAuth service provider endpoints for LinkedIn.
    private static readonly ServiceProviderDescription LinkedInServiceDescription =
        new ServiceProviderDescription
        {
            AccessTokenEndpoint =
                new MessageReceivingEndpoint("https://api.linkedin.com/uas/oauth/accessToken",
                                             HttpDeliveryMethods.PostRequest),
            RequestTokenEndpoint =
                new MessageReceivingEndpoint("https://api.linkedin.com/uas/oauth/requestToken",
                                             HttpDeliveryMethods.PostRequest),
            UserAuthorizationEndpoint =
                new MessageReceivingEndpoint("https://www.linkedin.com/uas/oauth/authorize",
                                             HttpDeliveryMethods.PostRequest),
            TamperProtectionElements =
                new ITamperProtectionChannelBindingElement[] { new HmacSha1SigningBindingElement() },
            ProtocolVersion = ProtocolVersion.V10a
        };

    public LinkedInCustomClient(string consumerKey, string consumerSecret)
        : base("linkedIn", LinkedInServiceDescription, consumerKey, consumerSecret)
    {
    }

    /// Check if authentication succeeded after the user is redirected back from the service provider.
    /// The response token returned from the service provider authentication result.
    [SuppressMessage("Microsoft.Design", "CA1031:DoNotCatchGeneralExceptionTypes",
        Justification = "We don't care if the request fails.")]
    protected override AuthenticationResult VerifyAuthenticationCore(AuthorizedTokenResponse response)
    {
        // See here for the Field Selectors API: http://developer.linkedin.com/docs/DOC-1014
        const string profileRequestUrl =
            "https://api.linkedin.com/v1/people/~:(id,first-name,last-name,headline,industry,summary)";

        string accessToken = response.AccessToken;

        var profileEndpoint = new MessageReceivingEndpoint(profileRequestUrl, HttpDeliveryMethods.GetRequest);
        HttpWebRequest request = WebWorker.PrepareAuthorizedRequest(profileEndpoint, accessToken);

        try
        {
            using (WebResponse profileResponse = request.GetResponse())
            {
                using (Stream responseStream = profileResponse.GetResponseStream())
                {
                    XDocument document = LoadXDocumentFromStream(responseStream);

                    string userId = document.Root.Element("id").Value;
                    string firstName = document.Root.Element("first-name").Value;
                    string lastName = document.Root.Element("last-name").Value;
                    string userName = firstName + " " + lastName;

                    var extraData = new Dictionary<string, string>
                    {
                        { "accesstoken", accessToken },
                        { "name", userName }
                    };

                    return new AuthenticationResult(
                        isSuccessful: true,
                        provider: ProviderName,
                        providerUserId: userId,
                        userName: userName,
                        extraData: extraData);
                }
            }
        }
        catch (Exception exception)
        {
            return new AuthenticationResult(exception);
        }
    }
}
You could create a custom type using XSD to hold the user's metadata in a manner that's easier to read and maintain. I'll let you decide on that; this is how the default client implements metadata retrieval, and I decided not to change it.
Now you just need to implement it like this:
OAuthWebSecurity.RegisterClient( new LinkedInCustomClient(consumerKey, secretKey), "LinkedIn", null);
I hope this helps while they fix the issues and update the package. Happy coding!
Note: The library client’s source code can be found here: https://github.com/AArnott/dotnetopenid/tree/master/src/DotNetOpenAuth.AspNet/Clients.
Update (11/12/2013) - [source code repo]
https://github.com/DotNetOpenAuth/DotNetOpenAuth
Text
TIP/Trick: How to split a string using C++.
Say you are like me: you have a strong background as a high-level language programmer. You enjoy the amenities provided by the strong, mature framework libraries that back these programming languages, and you're so used to using the string structure in such a natural way that for you it's just another primitive data type.
High-level languages like C# or Java provide a strong set of methods that manipulate strings in a large variety of ways. But then, all of a sudden, you find yourself programming in low/mid-level languages like C and C++, where strings are normally managed as dynamically allocated arrays (or pointers) or just plain static arrays of char elements.
If you are like me, when programming in C/C++ you'll try to find a counterpart for a function or method call among the language-provided libraries, just like you used to do in a high-level language. So when I tried to split a string into sub-strings using a delimiter in C++, I did the obvious and checked the standard string documentation for a method to achieve this, without any success. I know I could have imported the string.h library from C and worked around the issue with the strtok() function, but a lot of people discourage its use. I also know that third-party libraries like Boost provide this functionality, but that would make the program depend on a complete library for a simple task achievable with the core features of the language, not to mention mastering the other idioms and structures involved in using the function.
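As an aside, part of the reason strtok() is discouraged is that it writes NUL characters into the buffer it tokenizes (so it cannot operate on a const or literal string) and keeps hidden static state between calls, which makes it non-reentrant. A sketch of the ceremony it takes to use it safely (the wrapper is my own, for illustration):

```cpp
#include <cassert>
#include <cstring>
#include <string>
#include <vector>

// Tokenizes a *mutable copy* of the input with strtok(). strtok()
// overwrites each delimiter with '\0' and remembers its position in
// hidden static state between calls, so the input must be writable
// and the function is not safe to interleave or use across threads.
std::vector<std::string> splitWithStrtok( const std::string& source, const char* delims )
{
    std::vector<std::string> tokens;

    // strtok needs a writable, NUL-terminated buffer.
    std::vector<char> buffer( source.begin(), source.end() );
    buffer.push_back( '\0' );

    for( char* token = std::strtok( buffer.data(), delims );
         token != 0;
         token = std::strtok( 0, delims ) )
    {
        tokens.push_back( token );
    }

    return tokens;
}
```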
You may ask yourself: is this really an issue? Why on earth would you want to split a string? Well, it is common practice to concatenate different data tokens using a special character as a delimiter when sending data over the network. This way you can pack all the data that comprises a network packet into a single string instead of being forced to send the different data subsets in separate packets. Remember, resources are not infinite, and this way we do more with fewer operations.
A common example of this is CSV (comma-separated values), a widespread data format used for interchange purposes. Since matrix elements are commonly separated by commas when represented as data structures in programming languages, it is a natural way of packing data, and it makes it easy to implicitly understand a packet's content depending on the context. Comma-separated values usually look like this:
"Raul",27,"Santo Domingo"
56000, 256, "This is a text message"
Note that there is no specific order or rule for what goes first or last; this is left to the programmer's criteria and may vary for every program that generates this kind of data. There are also other common delimiters used for data representation, such as the tab character.
So the need to manipulate each data subset individually materializes in a split function. The basic signature of a split function is: string array split(string of characters, char delimiter). In other words, we invoke a function that returns a data structure comprising the data tokens that were originally separated by a delimiter. So if the original string was "This is my, original string", the resulting array would be { "This is my", " original string" }.
The Code
To achieve this in C++ I created a simple function that returns a string vector (an array-like data structure). The function definition goes like this:
vector<string> split(string str, string delim)
{
    // Use string::size_type rather than unsigned so the comparison
    // against string::npos behaves correctly on 64-bit platforms.
    string::size_type start = 0;
    string::size_type end;
    vector<string> v;

    // Each iteration finds the next delimiter and copies out the
    // token that precedes it.
    while ((end = str.find(delim, start)) != string::npos)
    {
        v.push_back(str.substr(start, end - start));
        start = end + delim.length();
    }

    // Don't forget the last token, after the final delimiter.
    v.push_back(str.substr(start));
    return v;
}
Calling it from client code is as simple as: vector<string> v = split(str, ",");.
So that's it: a simple way of splitting strings based on a delimiter. Enjoy!
7 notes