#gooddesignpractices
Improve your web or app design by applying fundamental UX principles! Put the needs of the user first, be consistent, create a clear hierarchy, and carry out accessibility testing. Make thoughtful typographic choices, give usability top priority, embrace minimalism, and use storytelling. Add animation to make the experience more engaging and to give users a better sense of control.
#uiuxdevelopment #uxdesignprinciples #elementofuxdesign #uxdesigncomponents #gooddesignpractices #principlesofuxdesign #uxprinciples #designprinciplesux #userexperiencedesignprinciples #uxdesignbestpractices #uiuxdesignprinciples #designprinciplesuiux #uiuxdesigners
DNS Propagation
Have you ever updated your domain's A record and noticed that, for at least several hours, your domain displayed the new site on one device (such as your smartphone) but the old site on another device, such as your home computer? Have you ever updated your domain's MX records and found that, for at least several hours, not all new emails were delivered to the new email server you specified? I cannot count the number of times I have seen these situations cause website owners to panic, pull their hair out, or get frustrated with their hosting provider. So what exactly is going on, and what can you do about it? What is happening is that the change you made to your domain's DNS is propagating throughout the internet. In what follows, I will explain what DNS propagation is and how you can reduce propagation times so that your changes update faster.

What is DNS Propagation?

"Propagation" is a term with several related meanings, but here it simply means the spreading of something from one thing to another. DNS was devised to be decentralized, so that there is no single, massive file that everyone needs to continuously download in order to have up-to-date records of which domain resolves to which IP. A natural consequence of this decentralized system is that any DNS change needs to propagate, or spread, to other systems before the rest of the internet can see it. This is a process that requires time. Fortunately, you do have control over some of that time.

One of the steps of the DNS resolution process is when your ISP (Internet Service Provider) caches, or stores, the looked-up record for a certain period of time. This is done so that the next time the record is requested it can be served immediately, which speeds things up on your end and reduces traffic on the ISP's end. When you have made a change to your domain's DNS, any nameservers (such as those belonging to your ISP) that have already stored that record in their caches will continue serving the old value until the record expires and they have to request an update. That is why a DNS change can take hours or even days to appear on some networks while being immediate on others: one network has a cached result, and the other does not. Fortunately, the length of time that a record is cached before being refreshed is determined by you, provided that you have access to edit the TTL, or Time to Live, field of the DNS record in question. Doing so is quite straightforward.

How Long Will it Take?

You will notice that each record has a TTL field containing a large number. This number is simply time in seconds. A TTL of 14400 means that any nameservers caching results for that record will do so for 14400 seconds, or 4 hours. After 4 hours, the cached record will expire and those nameservers will request an update from your DNS zone. In general, a TTL value of 14400 is perfectly adequate for most needs. Lowering it only increases the burden on your website's nameservers by making them respond more frequently to the other nameservers that cache your domain's records. But if you are, for example, migrating your website, or you want to change a DNS record for some other reason, then temporarily lowering the TTL of certain records not only makes sense but can work in your favor. The one caveat to keep in mind before doing so is that you need to plan ahead.
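As a quick sanity check, you can see what TTL your resolver is currently reporting for a record. Below is a minimal PHP sketch using the built-in dns_get_record() function; the hostname is a placeholder, and note that the TTL you get back may be the remaining cache time at your resolver rather than the value configured in the zone.

<?php
// Minimal sketch: look up the A record for a host and report its TTL.
// The hostname is a placeholder; substitute your own domain.
$host = 'blog.example.org';

$records = dns_get_record($host, DNS_A);

if ($records === false || count($records) === 0) {
    echo "No A records found for {$host}\n";
    exit(1);
}

foreach ($records as $record) {
    // 'ip' and 'ttl' are standard keys in dns_get_record() results for A records.
    printf(
        "%s resolves to %s (TTL: %d seconds, about %.1f hours)\n",
        $host,
        $record['ip'],
        $record['ttl'],
        $record['ttl'] / 3600
    );
}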
So, let's suppose that I want to change an A record for blog.example.org to some other IP, and I want that change to propagate as quickly as possible, minimizing the effects of long record caching. Because the A record's current TTL is 14400, or 4 hours, I first lower it to, say, 300, or 5 minutes, and then wait at least 4 hours. This gives any caching nameservers enough time to expire the old record and request a fresh copy carrying the new TTL value. Once that has happened, I can change the A record to the new IP, and within about 5 minutes the change should have propagated to every nameserver caching my DNS records.
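Once the A record has actually been updated, a small script can tell you when your local resolver starts returning the new address. The following PHP sketch is illustrative only; the hostname and expected IP are placeholders, and it simply polls dns_get_record() until the new IP appears or it gives up.

<?php
// Illustrative sketch: poll DNS until the A record returns the new IP.
// Hostname and expected IP are placeholders for your own values.
$host       = 'blog.example.org';
$expectedIp = '203.0.113.10';   // the new IP the A record was pointed to
$maxChecks  = 60;

for ($i = 0; $i < $maxChecks; $i++) {
    $records = dns_get_record($host, DNS_A);
    $ips     = array_column($records ?: [], 'ip');

    if (in_array($expectedIp, $ips, true)) {
        echo "Propagated: {$host} now resolves to {$expectedIp}\n";
        exit(0);
    }

    echo "Still seeing: " . implode(', ', $ips) . " - checking again in 60 seconds...\n";
    sleep(60);
}

echo "Gave up after {$maxChecks} checks; the change may still be propagating elsewhere.\n";

Keep in mind this only checks the resolver configured on the machine running the script; other networks may keep serving their cached copy until it expires.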
#DesignPractices #effectivedesign #efficientwebdesign #GoodDesign #GoodDesignPractices #goodwebdesign #hosting #servermanagement #webdesign #webdevelopment
Choosing NGINX for Growth with WordPress
Do it Because They Do
WordPress.com is the cloud version of WordPress that is hosted and supported by Automattic. WordPress.com serves more than 33 million sites, attracting over 339 million people and 3.4 billion page views each month. Since April 2008, WordPress.com has experienced roughly 4.4x growth in page views. WordPress.com VIP hosts many popular sites, including CNN's Political Ticker, NFL, Time Inc's The Page, People Magazine's Style Watch, corporate blogs for Flickr and KROQ, and many more. Automattic operates two thousand servers in twelve globally distributed data centers. WordPress.com customer data is instantly replicated between locations to provide an extremely reliable and fast web experience for hundreds of millions of visitors.

Problem

WordPress.com, which began in 2005, started on shared hosting, much like all of the WordPress.org sites. It was soon moved to a single dedicated server and then to two servers. In late 2005, WordPress.com opened to the public and by early 2006 had expanded to four web servers, with traffic distributed using round-robin DNS. Soon thereafter WordPress.com expanded to a second data center and then to a third. It quickly became apparent that round-robin DNS was not a viable long-term solution.

While hardware appliances like F5 BIG-IPs offered many features that WordPress.com required, Automattic decided to evaluate different options built on existing open source software. Using open source software on commodity hardware provides the ultimate level of flexibility and also comes with cost savings: "Purchasing a pair of capable hardware appliances in a failover configuration for a single datacenter may be a little expensive, but purchasing and servicing 10 sets for 10 data centers soon becomes very expensive."

At first, the WordPress.com team chose Pound as a software load balancer because of its ease of use and built-in SSL support. After using Pound for about two years, WordPress.com required additional functionality and scalability, namely:
- On-the-fly reconfiguration capabilities, without interrupting live traffic.
- Better health check mechanisms, allowing it to recover smoothly and gradually from a backend failure without overloading the application infrastructure with an unexpected flood of requests.
- Better scalability, in both requests per second and the number of concurrent connections. Pound's thread-based model was not able to reliably handle over 1,000 requests per second per load balancing instance.

Solution

In April 2008 Automattic converted all WordPress.com load balancers from Pound to NGINX. Before that, Automattic engineers had been using NGINX for Gravatar for a few months and were impressed by its performance and scalability, so moving WordPress.com over was the natural next step. Before switching WordPress.com to NGINX, Automattic evaluated several other products, including HAProxy and LVS. Here are some of the reasons why NGINX was chosen:
- Easy, flexible, and logical configuration.
- The ability to reconfigure and upgrade NGINX instances on the fly, without dropping user requests.
- Application request routing via the FastCGI, uwsgi, or SCGI protocols; NGINX can also serve static content directly from storage for additional performance optimization.
- It was the only software tested that was capable of reliably handling over 10,000 requests per second of live traffic to WordPress applications from a single server.
- NGINX's memory and CPU footprints are minimal and predictable. After switching to NGINX, CPU usage on the load balancing servers dropped roughly threefold.
Today, in 2012, WordPress.com serves an average of 70,000 requests per second and over 15 Gbit/sec of traffic from its 36 NGINX-powered load balancers, with plenty of room to grow. Most of the NGINX load balancers serve about 5,000 requests per second, sometimes peaking at 20,000 requests per second, and hold about 50,000 established connections each. The typical hardware configuration is dual Xeon 5620 CPUs (8 cores, with hyper-threading) and 8-12 GB of RAM, running Debian Linux 6.0. As part of its high-availability setup, WordPress.com previously used Wackamole/Spread but has recently started migrating to Keepalived. Inbound requests are distributed evenly across the NGINX-based web acceleration and load balancing layer using round-robin DNS. Following the successful deployment of NGINX as its web acceleration, load balancing, and traffic management solution, WordPress.com recently completed a migration from LiteSpeed to NGINX across all application backend servers. NGINX combined with PHP-FPM (the FastCGI Process Manager) gives the five-member Automattic Systems Team greater control and easier configuration with no additional maintenance overhead.
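To make the FastCGI routing concrete, here is a minimal, illustrative nginx configuration sketch showing static files served directly from disk and PHP requests routed to a load-balanced group of PHP-FPM backends. The upstream name, addresses, domain, and paths are assumptions for illustration, not WordPress.com's actual configuration.

upstream php_backends {
    # Hypothetical PHP-FPM backend servers; replace with your own addresses.
    server 10.0.0.11:9000;
    server 10.0.0.12:9000;
}

server {
    listen 80;
    server_name example.org;
    root /var/www/example.org;

    # Serve static assets straight from storage and let browsers cache them.
    location ~* \.(css|js|png|jpg|gif|ico)$ {
        expires 30d;
    }

    # Route PHP requests to the FastCGI (PHP-FPM) backends.
    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass php_backends;
    }
}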
#effectivedesign #GoodDesignPractices #goodwebdesign #linux #multi-site #php #speeduptheweb #webdesign #webdevelopment #wordpress
Why Responsive Web Design Is So Important
Mobile internet browsing is on the rise, people! As of this posting, the share of people browsing the internet from a mobile device is up to 18%, versus a declining 82% browsing from a desktop computer. These numbers are up 6% since this time last year. ~ http://gs.statcounter.com/#mobile_vs_desktop-ww-monthly-201209-201309

What does this tell us? Plain and simple, as time goes on and mobile devices keep improving, we will see these numbers reverse... meaning soon enough it will be 82% browsing from mobile! How does this relate to web design? It means that we as web developers really need to take a step back and start planning more for the future of our clients and their online presence(s). Even if it means we have to take a hit to our wallets, there is no reason anymore to delay the inevitable. We need to start planning and designing from small-screen smartphones up, instead of the other way around.

Smartphones and tablets have changed the approach toward design and user experience. Before the proliferation of mobile devices with advanced web-browsing capability, web designers had only one primary challenge to deal with: keeping the same look and feel of their websites in various desktop browsers. Interacting with websites on smartphones and tablets, however, is not the same as doing so on a desktop computer monitor. Factors such as click versus touch, screen size, pixel resolution, support for Adobe's Flash technology, optimized markup, and many more have become crucial when creating websites with responsive design.

But why is responsive design so important for your website? Before we answer that, we must understand what "responsive web design" is. Responsive web design, by definition, is a web design approach aimed at crafting sites to provide an optimal viewing experience, easy reading, and easy navigation across a wide variety of devices, from mobile phones to desktop computers.

So, Why is it So Important?

Time & Money

The notion that making a responsive website is expensive is just that: a notion. The fact is, while a responsive website costs somewhat more to build than a conventional one, the expense of duplicating the website for mobile and other devices is completely eliminated, which cuts total development costs significantly. In addition, a responsive design cuts the total cost of ownership by removing the effort of maintaining separate versions of a website, i.e. a "desktop version" and a "mobile version". Thus, in the long term, investing in responsive website design is the smartest decision.

Pervasion of Mobile Devices

Internet traffic originating from mobile devices is rising exponentially each day. As more and more people get used to browsing the web through their smartphones and tablets, it is foolhardy for any website publisher to ignore responsive web design. The "one site fits all devices" approach will soon be the norm.

User Experience

While content is king and discoverability of content is a foremost success metric, it is the user experience that enables visitors to consume content on any website through the device of their choice and preference, anytime. Thus, responsive web design is about providing the optimal user experience whether the visitor uses a desktop computer, a smartphone, a tablet, or a smart TV.

Device Agnostic

Responsive websites are agnostic to devices and their operating systems.
A responsive web design ensures that users get the best, most consistent experience of a website on any device of their choice and preference, be that an iPhone, an iPad, a smartphone running Android or Windows, or any of several others. As a result, website owners and content publishers do not need to build separate versions of their website for every popular device platform their audience might be using.

The Way Ahead

Rather than compartmentalizing website content into disparate, device-specific experiences, it is smarter to adopt the responsive web design approach. That's not to say there isn't a business case for separate sites geared toward specific devices; for example, if the user goals for your mobile content offering are more limited in scope than those of its desktop equivalent, then serving different content to each might be the best approach. But that kind of design thinking does not have to be our default. Now more than ever, digital content is meant to be viewed on a spectrum of different experiences. Responsive web design offers the way forward.
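For readers who haven't seen it in practice, the core of the approach is a fluid layout adjusted by CSS media queries. The stylesheet below is a minimal, hypothetical sketch; the class names and the 768px breakpoint are made up for illustration, not a recommended standard.

/* Mobile-first: content stacks in a single fluid column by default. */
.container {
  width: 100%;
  margin: 0 auto;
  padding: 1em;
}

.column {
  width: 100%;
}

/* On wider screens, cap the page width and place columns side by side. */
@media (min-width: 768px) {
  .container {
    max-width: 960px;
  }
  .column {
    width: 50%;
    float: left;
  }
}

In practice you would also add a viewport meta tag (width=device-width, initial-scale=1) in the HTML head so that mobile browsers do not render the page at desktop width.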
#DesignPractices #effectivedesign #efficientwebdesign #GoodDesign #GoodDesignPractices #goodwebdesign #webdesign #webdevelopment
Simple PDO Wrapper Class and Functions

Hey folks, since PDO is taking over, I figured it was prime time for me to jump off the bandwagon of direct DB access and take the plunge into PDO. As a result, I have built myself a nice and simple PDO wrapper class and some extra functions to do all the work that one would need to do against a MySQL database. So we are going to split this up into the two files I have set up for my testing environment. All of the code is commented, and if you 'do not get it', well, maybe you should seek other hand holders to guide you through the basics of programming for the web ;-P Without any further ado:
db.class.php
db.functioning.php
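The original class and function files are not reproduced in this excerpt, so here is a minimal sketch of what a simple PDO wrapper along these lines might look like. The class name, credentials, and table in the usage example are placeholders, not Kevin's actual code.

<?php
// db.class.php (illustrative sketch only -- not the original file)
class DB
{
    private $pdo;

    public function __construct($host, $dbname, $user, $pass)
    {
        $dsn = "mysql:host={$host};dbname={$dbname};charset=utf8";
        $this->pdo = new PDO($dsn, $user, $pass);
        // Throw exceptions on errors instead of failing silently.
        $this->pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
    }

    // Run a SELECT with bound parameters and return all rows.
    public function select($sql, array $params = array())
    {
        $stmt = $this->pdo->prepare($sql);
        $stmt->execute($params);
        return $stmt->fetchAll(PDO::FETCH_ASSOC);
    }

    // Run an INSERT/UPDATE/DELETE and return the number of affected rows.
    public function execute($sql, array $params = array())
    {
        $stmt = $this->pdo->prepare($sql);
        $stmt->execute($params);
        return $stmt->rowCount();
    }
}

// Example usage with placeholder credentials and table name:
$db   = new DB('localhost', 'testdb', 'dbuser', 'dbpass');
$rows = $db->select('SELECT * FROM posts WHERE status = ?', array('published'));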
Now, by all means, if you can make this better, leave me some comments with your suggestions, and as I figure out better ways to do this I will post them here. Have fun coding! ~Kevin
Quick Way to Speed up Your ModX Site

Hey folks, time for another quick article to get your ModX site running tip-top. This one will let you 'prefetch' your site's pages for a faster browsing experience. Now, ModX has a pretty fantastic caching mechanism already built in, but I find that if your site has a lot of pages, the initial load can sometimes take a bit. This will take care of that issue. First and foremost, you will need to make sure to include the latest jQuery library in your templates. It can be found at jquery.com. Next, you will need to create a PHP file inside your 'wp-content' folder. I usually create a few default folders inside this... one happens to be called 'php'. Once you have the file created, remember what it is called, and add this code to it: Now, once you've got this page created, insert the following tag at the bottom of your template; you should already be including your script files down there anyway. As you can see, I've named mine 'prefetched.php'; just change the file name and path to wherever you have yours located. Using Firebug in Firefox you can see these pages getting 'fetched', and when you browse to them you'll notice that they load a bit faster than they did before; this is because they are now primed in your browser's cache. UPDATE: You can also help this along by using HTML5's built-in prefetcher! Create a snippet with the following code:
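The snippets referenced above aren't included in this excerpt, so here is a hedged sketch of the general idea: a small jQuery loop that quietly requests a list of page URLs so the browser caches them, plus the HTML5 prefetch hint mentioned in the update. The jQuery version, URL list, and file name are placeholders, not the original code.

<!-- Illustrative only: load jQuery, then warm the browser cache for a few pages. -->
<script src="https://code.jquery.com/jquery-1.9.1.min.js"></script>
<script>
  // Placeholder list of pages to prefetch; in a setup like the one described,
  // this list would come from the PHP file (e.g. 'prefetched.php') generated by ModX.
  var pagesToPrefetch = ['/about.html', '/services.html', '/contact.html'];

  $(function () {
    $.each(pagesToPrefetch, function (i, url) {
      // A simple background GET primes the browser cache for each page.
      $.get(url);
    });
  });
</script>

<!-- HTML5 prefetch hint, one per page you want fetched ahead of time. -->
<link rel="prefetch" href="/about.html">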
#DesignPractices #efficientwebdesign #GoodDesignPractices #goodwebdesign #modx #modxefficiency #php #speeduptheweb #webdevelopment
Setup SVN With Attached Drives on CentOS
These are the steps that I took to create an SVN server using CentOS 7, attaching drives as repositories instead of creating a monstrous system drive and importing everything there. If you follow this to a "T", you too can have the playground I have =) I will lay out my exact steps, including creating the virtual machine I used for this. If you are currently using a VM or a dedicated machine, then you can skip those steps. I imagine this would work on CentOS 5 and up, but don't quote me on that.

1. Prepare the server
   i. Install CentOS as a Basic Web Server, with no GUI.
   ii. Run yum update.
   iii. Allow ports 80 and 443 through the firewall, then run firewall-cmd --reload.
   iv. Create the /svn directory and give Apache ownership of it: chown -R apache:apache /svn
   v. Now we need to configure SVN and set up user access (type in the new password twice when prompted).

2. Attach and mount the drives
   i. Create a new partition on each attached drive.
   ii. Format each partition: mkfs.ext4 /dev/sdb1 (sdc1 and sdd1 as well).
   iii. Create mount points: mkdir /mnt/Disk1 (also for Disk2 and Disk3).
   iv. Add an fstab entry for each drive, for example: /dev/sdb1 /mnt/Disk1 ext4 defaults 0 0
   v. Now we add the proper permissions: chown -R apache:apache /mnt/Disk1

3. Now create and import the repos (replace Disk* with each disk you mounted, or whatever you want to name it), copying all content from the disk into the 'trunk' directory we just created.
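The exact commands for the repository step aren't shown in this excerpt, so here is a sketch of what creating and importing a repository on one of the mounted drives typically looks like. The repository name, paths, and commit message are placeholders, not the author's originals.

# Illustrative sketch only -- repo name and paths are placeholders.
# Create a new repository on the mounted drive and let Apache own it.
svnadmin create /mnt/Disk1/myrepo
chown -R apache:apache /mnt/Disk1/myrepo

# Build the standard layout and copy the disk's existing content into trunk.
mkdir -p /tmp/myrepo-layout/trunk /tmp/myrepo-layout/branches /tmp/myrepo-layout/tags
cp -R /mnt/Disk1/old-content/. /tmp/myrepo-layout/trunk/

# Import the layout (and content) into the new repository.
svn import /tmp/myrepo-layout file:///mnt/Disk1/myrepo -m "Initial import"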