Text
$300 Cash Giveaway and 2 Set Ups!
We wanted to change things up, since we saw that a good number of people enjoyed the contest where we gave away actual cash. Naturally, we felt the need to do it all over again; we know there are some people out there who are just a little hard up for cash. The terms of this contest can be found at the bottom of the main page.
$300 in Money Order/Cash
Pick your mod and tank or starter kit of your choice (not over $100 total)
Pick your mod and tank or starter kit of your choice (not over $75 total)
Pick your mod and tank or starter kit of your choice (not over $50 total)
206 notes
Text
Authentic Vape Gear Contest
Due to new laws and regulations we have to charge for this contest. It's $1.00, and it's the mandatory option to get in. The system will choose the winners at random; obviously, the more points you put in, the more chances you have of winning. We figured $1 would be enough, as it's not a big ask but it allows us to hold the contest.
Authentic Rose3 RTA & 2 VapeL1FE Liquids
*Choice of Therion, Triade or SMY 75
*Subject to availability
https://vapel1fe.com/pages/contest
83 notes
Photo

Von Amper offers premium handmade ties, fine leather bags, briefcases and accessories. Handmade with attention to detail, in a tradition of artful craftsmanship. Only timelessly elegant products for the modern gentleman. www.vonamper.de
25 notes
Text
Example Post Title
This text is controlled by the user; they can use it to explain to their followers how to enter, or whatever they want :) Oh, they can also use HTML if they want.
Tumblr Demo Competition
0 notes
Text
Enter this awesome competition
Test Gleam Competitions
Test Competition
0 notes
Text
Test
Test <a href="https://blah.com">Blah</a>
Test Competition
0 notes
Photo

Second rack at RamNode Atlanta.
On that note we’re running a competition to win one of 10 RamNode SSD VPS this month:
11 notes
Text
ServerBear Month 1 Update - Results & Stats
At the moment ServerBear is a website; we're hoping that eventually we can build it into a business. It's something that both I (@thegyppo) & my co-founder John (@ponny) are passionate about, and we also think there's a huge gap in the marketplace for what we're doing.
We've both made a conscious decision that the best way to run our site is through openness & transparency, which is why we've decided to reveal our data every month here on the blog, so you can see what we're doing, how we're performing & whether we're failing or succeeding (hopefully not the former!).
For the record, ServerBear is not our full-time gig. We both work full-time jobs, & our startup (Crowd9) has a number of other websites that we maintain. These generate income for us, but they don't offer the same opportunity to make an impact that we have with ServerBear.
With ServerBear we have to try & get to the point where we reach critical mass. We want to be the go-to place that people trust to find accurate data about web hosting. This is not an easy task, but we are approaching it from a different perspective. We feel we need to build something that is primarily useful for web hosts: we want you to want to work with us, & we will do whatever it takes to add as much value as we can. We have the power to make people think twice about your company if you provide a better level of performance & quality than a big-name hosting company (and we'll mention a few of the hosts we've worked with this month).
We also have to make a living; the world is becoming more expensive, and John & I both have families, kids & mortgages. We need to make a serious impact if we're ever both to consider working on ServerBear full-time.
Traffic Stats
We launched officially on 24th July 2012; the site itself has been live since 22nd June 2012, but we've been iterating based on feedback.
In total we've had 5,561 visits since launch, which have resulted in over 4,500 click-throughs to web hosts (an incredible click-through rate; we do suspect that search engine bots might be inflating our internal numbers).
You'll see that we're getting close to no search engine traffic yet; we probably don't have enough links for the Google gods to trust us. Konstantin from VPSeer kindly linked to us from his VPS SSD Performance Chart last week (thanks!). As with all good products, we hope these types of links will just happen naturally over time.
Low End Talk: I stumbled across this awesome community whilst doing some research; there are a lot of smart people here who run their own hosting companies (a few I've spoken with: Tim from Hostigation, Nick from RAMNode, Elliot from Virt.io, Kevin from URPad, Simon from MiniVPS, Corey from Front Range Hosting, Jacob from EaseVPS, Joe from Secure Dragon & Jack from DotVPS) - they have provided great support & have almost adopted us as their standard test when launching new offers or plans. We need to continue providing value to communities like this, especially those that care about the performance of their servers.
Web Hosting Talk: This forum is where the industry hangs out, & it's been a great way to network with web hosts & answer questions. Again we've been trying to provide value in every post here, but WHT (as it's known for short) is similar to Digital Point in that there are a lot of uneducated people who reduce the quality of the forum to mush. You'll see plenty of posts complaining about downtime, plus too many pointless filler posts from hosts trying to inflate their post count.
Hacker News: One of John's good friends posted the site to Hacker News, and whilst we only got about 8 up-votes I was quite surprised by the amount of traffic it sent us. I think we could have done the post's title more justice, but alas we weren't in control of posting it. We also didn't get much feedback from this particular set of users, so we probably need a different angle (like showing the web hosting choices of the latest YC batch).
Twitter: We've been getting into Twitter a lot lately, it's been useful to announce when we add a new host & benchmarks. We're getting good traction from hosts also retweeting us to their followers when we give them some nice data. Here's a good example of Joyent retweeting us after we benchmarked their plans (to over 9k followers). Don't forget to follow us if you use Twitter :)
Site Stats
We've been working pretty hard, mostly in the evenings & weekends, to improve our stats. For me the single most important metric is new benchmarks; the more coverage we have, the more valuable our dataset becomes - which in turn should lead to authority within the industry (or so we hope):
456 New Benchmarks
An 85% success rate when contacting web hosts & asking them to provide benchmarks (or an account for us to run benchmarks on). To me this really validates that we're providing some sort of value.
62 Hosts have contacted us directly via our contact form
143 New Hosts Added (2.38 per day)
1089 Hosting Plans Added (18 per day)
13% of the links to hosts are currently affiliated (i.e. they have the chance to earn us revenue)
80k+ Lines of Plan Data (This includes RAM/HDD Size/CPU Cores, all the data that will make it easier for you to filter by a specific need)
71 Hosting Coupons (quite a few less than we'd like; we're planning to improve this part of the site)
79 Tweets & 43 Twitter followers (we only started this on 8th Aug)
Here are some interesting stats for the month based on the most popular plans:
Most Benchmarked Plan: Linode 512
Most Popular Benchmark: OVH Free Beta BHS
Most Popular VPS Host: Hostigation
Most Popular Dedicated Host: Secured Servers
Most Popular Cloud Host: Joyent
Most Popular Plan: Wizz:160 from WizzVPS
Most Popular Category: VPS (followed by Low End Boxes)
Most Popular Blog Post: Our Amazon High IO Benchmarks
Affiliate Signups
We're tracking how many signups we send to hosts that we're affiliated with; we eventually want to automate this. Our average conversion rate is about 5.5%, which means we should have sent ~100 sales in this first launch month if we extrapolate it out to our traffic:
Site5 - 3 Signups
RamNode - 3 Signups
GridVirt - 2 Signups
BlueVM - 1 Signup
Hostigation - 5 Signups
URPad - 1 Signup
PhotonVPS - 2 Signups
Eleven2 - 3 Signups
We really haven't concentrated on signing up to affiliate programs, & we also don't have deep linking integrated - so we're not currently getting credited when we link directly to the plan signup link in WHMCS. Remember that if a host has an affiliate program we will sign up to it (it's an invisible way for us to generate revenue), but this does not impact any of our reports. We only report on benchmarked data.
100 signups roughly equates to $3k or so in monthly revenue for us (depending on the host & whether it's a one-off or annual commission). We don't have enough data to really dig into this number, but by my calculations we need to be aiming for 1k+ signups (which is around 50k visits/month at our current conversion rates).
Our Technology Stack & Tools
Our setup is quite unique: ServerBear is a whitelabelled frontend that we've built in Ruby, and it's extremely flexible & customisable. If we ever wanted to scale out a similar site with a different theme, we could do it easily.
We don't actually store our data on ServerBear; we have a management platform that acts as our data storage engine (it's also developed in Ruby). We sync data from the management platform to ServerBear. The advantage of this is that the data inside ServerBear is static - which allows us to do lots of cool caching tricks to make the site super fast.
Our technology stack looks something like this:
Dual E5506 with 24GB of RAM
RAID10 HDDs
Percona build of MySQL
Memcache
Ruby on Rails
Nginx
CoffeeScript (for filtering)
Mailgun for SMTP email
Bootstrap (we've modified bootstrap for our theme/skin)
Node.js (for our new Uptime Benchmarks)
MongoDB (for our new Uptime Benchmarks)
Amazon AWS backups
Secondary Dropbox backups
Third redundant backup server
Scoutapp for application monitoring
Github
Tumblr (for the blog)
Bufferapp (for scheduling Tweets in US time)
Google Analytics
Development & New Features
We've been working hard on development (John is dedicating 1 full day per week); since we launched we've added a fair number of new features:
Benchmark Reports: When we launched we were only sending a .txt benchmark attachment; we still do this (although I'm not entirely happy with the format). We now also create a comprehensive report that gets emailed for every benchmark you run.
IO Benchmarks: We originally launched with only UnixBench & network speed. We've added dd & IOPS tests, & yesterday we pushed the first version of FIO (sequential read/write IOPS).
Filtering: We got a lot of feedback that we needed to allow filtering; on any of our comparison pages (like KVM VPS) you can now filter by whatever your requirements are. Need a certain amount of RAM, have a specific budget, or need the server in a certain location? We've got it covered.
Plan Info Tooltip: We had a lot of complaints that clicking a plan took you to the host every time. Now we show some useful information instead; this is an area we want to improve even further.
New Categories: I've been working on building out new categories for filtering. Here's what we've added in the last 2 months (please suggest more categories below):
VPS - Benchmarks, SSD Benchmarks, Low End Boxes (Under $7), Cheap VPS (Under $12), Budget VPS (Under $20), KVM, OpenVZ, Xen, SSD, Unmetered.
Dedicated - Benchmarks
Cloud - Benchmarks, Australian Cloud Provider Benchmarks
Distro Support: We've added support for Debian, CentOS (plus fixed bugs with the script failing in CentOS 5.x) & Red Hat.
New Homepage: We've added more depth to the homepage with the Top & Best Value UnixBench/IO Benchmarks. We'll be improving this further in the next month.
More Metrics: We have a tonne of metrics assigned to each plan that we don't show yet like Uplink (10/100/1Gbps), CPU Cores, Processor Type, Burst RAM & a load more.
Column Filtering: Click on a column to sort by that metric - a great way to sort anything ascending or descending.
Help Tooltips: Need help understanding what a particular metric is? We've added some help tooltips to show you.
We've still got a lot to do; we've validated the idea enough that we feel we can continue to add value. Here's some of the stuff we've got planned:
Uptime Benchmarks: We're pretty far into this project at the moment & hope to have something rolled out in September. We currently have 10 redundant nodes which will monitor the uptime of multiple servers per datacenter per host (where possible). Reported downtime from 3 or more nodes to 1 or more servers will classify a particular host as down (a toy sketch of this rule follows after this list). We'll have a leaderboard just like we have for everything else, which will go into detail on outages.
Plan Information Pages: We're building out individual pages for each hosting plan; these will provide more detail than ever before, including benchmarks, locations, CPUs per server & location, uptime & heaps more.
Location Information: We assign GeoIP information to benchmarks; we'll make it so you can see how plans benchmark in different locations (so you know which location to choose).
Public Reports: At the moment you can't see the reports for most of the benchmarks on the site; we just summarise the data. Eventually we'll allow reports to be public or private - if a report is public, it'll be browsable on the site.
WHMCS Plugin: We're toying with the idea of a WHMCS plugin that'll allow hosts to add uptime nodes & see their current status via WHMCS (plus a number of other features).
Partial Benchmarks: We're getting feedback that sometimes users don't want to run the full benchmark, so eventually we'll roll out the ability to choose particular parts of it - if you just want to run a quick IO test or network test, for example.
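To make the quorum rule from the Uptime Benchmarks item concrete, here's a toy sketch in shell - not our actual implementation (which is built on Node.js & MongoDB), & the node hostnames & target IP are hypothetical placeholders:

# Toy illustration of the "3 or more nodes report downtime" rule.
# node1..node3.example.com & 203.0.113.10 are placeholders.
TARGET="203.0.113.10"
FAILS=0
for NODE in node1.example.com node2.example.com node3.example.com; do
  # Each monitoring node independently checks the target server
  ssh "$NODE" "ping -c 3 -W 2 $TARGET > /dev/null" || FAILS=$((FAILS + 1))
done
# Only when 3+ independent nodes agree do we classify the host as down
if [ "$FAILS" -ge 3 ]; then
  echo "host down"
else
  echo "host up"
fi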
Wrapping Up
Without you giving us feedback & using the site, we're just a fancy application with data. So from both John & myself: thank you for using us & giving us a reason to keep making ServerBear better.
Hosts: Benchmark your plans, share your reports with potential customers. Link to your reports from your website.
Users: Benchmark your current host, see how it stacks up. Don't settle for poor performance!
Benchmark Your Host
0 notes
Text
Web Hosts Should Have Redundant Websites
I've been noticing a worrying trend lately: whenever a web host goes down due to network or hardware issues, their website tends to go down with it.
In many safety-critical systems, such as fly-by-wire and hydraulic systems in aircraft, some parts of the control system may be triplicated. An error in one component may then be out-voted by the other two. In a triply redundant system, the system has three sub components, all three of which must fail before the system fails.
This only adds fuel to the fire: if your website or blog is down you have no way to communicate issues to your clients, & the longer you take, the more agitated they will get.
As a web host you should always consider hosting your main site outside of your network. I know it might feel shameful, but by spreading yourself across multiple networks you reduce the probability that multiple systems will be down simultaneously.
You can use subdomains to your advantage too: if you have a status.hostname.com to deliver network status updates to customers, it's easy to ensure it lives on another network with failover (this particular subdomain is critical to keeping customers in the loop).
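A quick way to sanity-check this (a minimal sketch - hostname.com is a placeholder for your own domain, & you'll need the dig & whois tools installed):

# Resolve the main site & the status subdomain
dig +short www.hostname.com
dig +short status.hostname.com

# whois against the status IP shows the owning network; ideally it
# differs from the network your own infrastructure sits on
whois "$(dig +short status.hostname.com | tail -n1)" | grep -iE 'netname|orgname'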
If All Else Fails?
If all else fails then you have a few final options to keep your customers informed:
Use a service like Mailgun for your email, so you can still send customers an email informing them of any outages (see the sketch after this list).
Be active on social networks like Twitter & Facebook, & encourage users to follow you for network updates.
Use forums to let your customers & peers know that you are experiencing issues.
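As a sketch of the first option, here's an outage notice sent through Mailgun's HTTP API with curl (API_KEY, YOUR_DOMAIN & the addresses are placeholders - check Mailgun's documentation for the exact endpoint for your account):

# Send an outage notice via Mailgun's HTTP API; all values are placeholders
curl -s --user "api:API_KEY" \
  https://api.mailgun.net/v3/YOUR_DOMAIN/messages \
  -F from="Status <status@YOUR_DOMAIN>" \
  -F to="customer@example.com" \
  -F subject="Network outage update" \
  -F text="We're investigating an outage & will post updates on our status page."

Because Mailgun sits outside your own network, these emails still go out while your infrastructure is dark.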
If you stay silent, customers will automatically assume the worst - that you've gone out of business. You could lose customers who panic, forum posts will crop up & you'll do more harm - all because you decided against hosting any of your assets on someone else's network.
1 note
Text
Amazon High I/O Quadruple Extra Large Benchmarks
Amazon today announced the release of a new EC2 instance type targeted at I/O-dependent applications. We've benchmarked a lot of the Amazon plans in the past & found them rather lacklustre in terms of performance.
High I/O Quadruple Extra Large Instance Specs (hi1.4xlarge)
60.5 GB of memory
35 EC2 Compute Units (8 virtual cores with 4.4 EC2 Compute Units each)
CPUs: Intel Xeon E5620
2 SSD-based volumes each with 1024 GB of instance storage
Uplink: 10 Gigabit Ethernet
IOPS
First, let's run with cached IO. From previous SSD testing we should expect this to be upwards of the 800 MB/s mark.
ioping . -c 10 -C

4096 bytes from . (ext3 /dev/xvdf): request=1 time=0.0 ms
4096 bytes from . (ext3 /dev/xvdf): request=2 time=0.0 ms
4096 bytes from . (ext3 /dev/xvdf): request=3 time=0.0 ms
4096 bytes from . (ext3 /dev/xvdf): request=4 time=0.0 ms
4096 bytes from . (ext3 /dev/xvdf): request=5 time=0.0 ms
4096 bytes from . (ext3 /dev/xvdf): request=6 time=0.0 ms
4096 bytes from . (ext3 /dev/xvdf): request=7 time=0.0 ms
4096 bytes from . (ext3 /dev/xvdf): request=8 time=0.0 ms
4096 bytes from . (ext3 /dev/xvdf): request=9 time=0.0 ms
4096 bytes from . (ext3 /dev/xvdf): request=10 time=0.0 ms

--- . (ext3 /dev/xvdf) ioping statistics ---
10 requests completed in 9000.9 ms, 217391 iops, 849.2 mb/s
min/avg/max/mdev = 0.0/0.0/0.0/0.0 ms
Then Direct IO. This number is actually comparable to some of the RAMNode SSD plans.
ioping . -c 10 -D

4096 bytes from . (ext3 /dev/xvdf): request=1 time=0.2 ms
4096 bytes from . (ext3 /dev/xvdf): request=2 time=0.3 ms
4096 bytes from . (ext3 /dev/xvdf): request=3 time=0.3 ms
4096 bytes from . (ext3 /dev/xvdf): request=4 time=0.3 ms
4096 bytes from . (ext3 /dev/xvdf): request=5 time=0.3 ms
4096 bytes from . (ext3 /dev/xvdf): request=6 time=0.3 ms
4096 bytes from . (ext3 /dev/xvdf): request=7 time=0.3 ms
4096 bytes from . (ext3 /dev/xvdf): request=8 time=0.3 ms
4096 bytes from . (ext3 /dev/xvdf): request=9 time=0.3 ms
4096 bytes from . (ext3 /dev/xvdf): request=10 time=0.3 ms

--- . (ext3 /dev/xvdf) ioping statistics ---
10 requests completed in 9003.5 ms, 3700 iops, 14.5 mb/s
min/avg/max/mdev = 0.2/0.3/0.3/0.0 ms
Then a seek rate test.
ioping -R /dev/xvdf

--- /dev/xvdf (device 1024.0 Gb) ioping statistics ---
12296 requests completed in 3000.2 ms, 7817 iops, 30.5 mb/s
min/avg/max/mdev = 0.1/0.1/0.4/0.1 ms
Now we'll test sequential writes.
--- . (ext3 /dev/xvdf) ioping statistics ---
3377 requests completed in 3000.4 ms, 1344 iops, 336.0 mb/s
min/avg/max/mdev = 0.6/0.7/2.6/0.2 ms
DD
dd if=/dev/zero of=test bs=1M count=1k oflag=dsync
1073741824 bytes (1.1 GB) copied, 3.75235 s, 286 MB/s

dd if=/dev/zero of=test bs=64k count=16k oflag=dsync
1073741824 bytes (1.1 GB) copied, 13.4007 s, 80.1 MB/s

dd if=/dev/zero of=test bs=1M count=1k conv=fdatasync
1073741824 bytes (1.1 GB) copied, 3.25754 s, 330 MB/s

dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
1073741824 bytes (1.1 GB) copied, 3.3367 s, 322 MB/s
There seems to be an issue with disk performance in dsync mode; we see this often with SSD drives. We really need to use a bigger file to test sustained performance.
dd if=/dev/xvdf of=test bs=1M
7646380032 bytes (7.6 GB) copied, 17.4732 s, 438 MB/s
hdparm -tT /dev/xvdf

/dev/xvdf:
 Timing cached reads:   14502 MB in 1.99 seconds = 7289.54 MB/sec
 Timing buffered disk reads:  1142 MB in 3.00 seconds = 380.53 MB/sec
UnixBench
------------------------------------------------------------------------
Benchmark Run: Fri Jul 20 2012 02:40:24 - 03:08:35
16 CPUs in system; running 1 parallel copy of tests

Dhrystone 2 using register variables        24160326.4 lps   (10.0 s, 7 samples)
Double-Precision Whetstone                      2989.9 MWIPS (10.0 s, 7 samples)
Execl Throughput                                 986.9 lps   (30.0 s, 2 samples)
File Copy 1024 bufsize 2000 maxblocks         271531.6 KBps  (30.0 s, 2 samples)
File Copy 256 bufsize 500 maxblocks            70013.0 KBps  (30.0 s, 2 samples)
File Copy 4096 bufsize 8000 maxblocks         802038.9 KBps  (30.0 s, 2 samples)
Pipe Throughput                               410363.6 lps   (10.0 s, 7 samples)
Pipe-based Context Switching                   44733.0 lps   (10.0 s, 7 samples)
Process Creation                                2211.5 lps   (30.0 s, 2 samples)
Shell Scripts (1 concurrent)                    3919.1 lpm   (60.0 s, 2 samples)
Shell Scripts (8 concurrent)                    1538.9 lpm   (60.0 s, 2 samples)
System Call Overhead                          441575.6 lps   (10.0 s, 7 samples)

System Benchmarks Index Values               BASELINE       RESULT    INDEX
Dhrystone 2 using register variables         116700.0   24160326.4   2070.3
Double-Precision Whetstone                       55.0       2989.9    543.6
Execl Throughput                                 43.0        986.9    229.5
File Copy 1024 bufsize 2000 maxblocks          3960.0     271531.6    685.7
File Copy 256 bufsize 500 maxblocks            1655.0      70013.0    423.0
File Copy 4096 bufsize 8000 maxblocks          5800.0     802038.9   1382.8
Pipe Throughput                               12440.0     410363.6    329.9
Pipe-based Context Switching                   4000.0      44733.0    111.8
Process Creation                                126.0       2211.5    175.5
Shell Scripts (1 concurrent)                     42.4       3919.1    924.3
Shell Scripts (8 concurrent)                      6.0       1538.9   2564.9
System Call Overhead                          15000.0     441575.6    294.4
                                                                   ========
System Benchmarks Index Score                                         527.9

------------------------------------------------------------------------
Benchmark Run: Fri Jul 20 2012 03:08:35 - 03:36:59
16 CPUs in system; running 16 parallel copies of tests

Dhrystone 2 using register variables       201313595.2 lps   (10.0 s, 7 samples)
Double-Precision Whetstone                     40724.6 MWIPS (10.1 s, 7 samples)
Execl Throughput                                8050.6 lps   (29.9 s, 2 samples)
File Copy 1024 bufsize 2000 maxblocks         348105.5 KBps  (30.0 s, 2 samples)
File Copy 256 bufsize 500 maxblocks            95790.4 KBps  (30.0 s, 2 samples)
File Copy 4096 bufsize 8000 maxblocks        1189159.8 KBps  (30.0 s, 2 samples)
Pipe Throughput                              4705588.4 lps   (10.0 s, 7 samples)
Pipe-based Context Switching                  634652.9 lps   (10.0 s, 7 samples)
Process Creation                               14990.6 lps   (30.0 s, 2 samples)
Shell Scripts (1 concurrent)                   21494.0 lpm   (60.0 s, 2 samples)
Shell Scripts (8 concurrent)                    2880.1 lpm   (60.2 s, 2 samples)
System Call Overhead                         4191761.5 lps   (10.0 s, 7 samples)

System Benchmarks Index Values               BASELINE       RESULT    INDEX
Dhrystone 2 using register variables         116700.0  201313595.2  17250.5
Double-Precision Whetstone                       55.0      40724.6   7404.5
Execl Throughput                                 43.0       8050.6   1872.2
File Copy 1024 bufsize 2000 maxblocks          3960.0     348105.5    879.1
File Copy 256 bufsize 500 maxblocks            1655.0      95790.4    578.8
File Copy 4096 bufsize 8000 maxblocks          5800.0    1189159.8   2050.3
Pipe Throughput                               12440.0    4705588.4   3782.6
Pipe-based Context Switching                   4000.0     634652.9   1586.6
Process Creation                                126.0      14990.6   1189.7
Shell Scripts (1 concurrent)                     42.4      21494.0   5069.3
Shell Scripts (8 concurrent)                      6.0       2880.1   4800.1
System Call Overhead                          15000.0    4191761.5   2794.5
                                                                   ========
System Benchmarks Index Score                                        2652.2
The server seems to struggle in UnixBench on the low-buffer-size file copy tests, but excels in the shell script tests. The CPU is an Intel Xeon E5620, which is disappointing; it could really be holding this particular instance back when you consider that some of the higher-end CPUs (E5 series) benchmark up to 400x better.
Price
Running this instance is by no means cheap; at $3.10/hour it'll set you back over $2,300 per month. However, for IO-intensive applications currently spread across a large number of EC2 instances, it may be worthwhile consolidating down to a smaller number of hi1.4xlarge instances. You can see some interesting calculations from Netflix earlier today on their savings from moving to the high I/O instances.
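As a rough worked example (instance cost only - bandwidth & EBS usage will add to the bill):

\[ \$3.10/\text{hr} \times 24\ \text{hr} \times 31\ \text{days} \approx \$2{,}306 \]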
0 notes
Text
Installing & Running UnixBench on Ubuntu
UnixBench is part of the core benchmarking methodology here at ServerBear; it provides a system-level performance overview by testing a number of factors:
Dhrystone
Whetstone
Execl Throughput
File Copy
Pipe Throughput
Pipe-based Context Switching
Process Creation
Shell Scripts
System Call Overhead
Graphical Tests
At the end you'll get an overall system score for a single process utilising 1 CPU core, and for parallel processes utilising all the CPU cores in your system. If you want an idea of how your server compares to other hosts, you can check our server benchmarks.
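If you're curious how that score is derived: each test's index is the measured result divided by a fixed baseline (taken from a late-90s SPARCstation reference system) & multiplied by 10, & the overall score is the geometric mean of the individual indices:

\[ \text{index}_i = 10 \times \frac{\text{result}_i}{\text{baseline}_i}, \qquad \text{score} = \Big(\prod_{i=1}^{n} \text{index}_i\Big)^{1/n} \]

You can verify this against any UnixBench report - a Dhrystone result of 24,160,326.4 lps against the 116,700.0 baseline gives an index of 2,070.3, for example.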
There are two ways you can get UnixBench running on your Ubuntu system:
Compile It Yourself
To compile UnixBench yourself, you'll need to make sure you have the right packages installed in Ubuntu:
apt-get install libx11-dev libgl1-mesa-dev libxext-dev perl perl-modules make
Once everything is installed you can grab the latest UnixBench version (5.1.3) from Google Code; the following commands will also run it:
wget http://byte-unixbench.googlecode.com/files/UnixBench5.1.3.tgz
tar xvf UnixBench5.1.3.tgz
cd UnixBench5.1.3
./Run
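You don't have to run the full suite every time either - the Run script accepts options & individual test names (a hedged example; see the USAGE file shipped in the tarball for the full list):

./Run -c 1 -c 16                    # one single-copy pass, then one with 16 parallel copies
./Run dhry2reg whetstone-double     # run only the Dhrystone & Whetstone tests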
Use ServerBear
The above method is fine if you just want your UnixBench score. However, if you also want a deeper understanding of IO performance & IOPS, plus a way to compare your UnixBench score with other servers & plans, we make it super easy for you. Here's an example of one of our reports.
Benchmark Your Server
0 notes
Text
Does Unlimited Hosting Exist?
If you're in the market for any type of shared hosting plan, you'll probably see the word unlimited used heavily. But let's take a step back for a moment & ask whether it's really possible to offer truly unlimited hosting.
In What Context Is Unlimited Used?
Let's use Dreamhost as an example; they have only one shared hosting plan, which offers the following goodies:
Unlimited Storage
Unlimited Bandwidth
Unlimited Domains
Unlimited Shell Users
Unlimited Email Accounts
Unlimited MySQL Databases
Unlimited Subdomains
We know that there's no such thing as an unlimited hard drive or an unlimited amount of bandwidth - these are finite resources (the host must have a hard limit somewhere). Everything else, however, can technically be unlimited (up to the point at which you exhaust the server's finite resources).
All Unlimited Plans Are Not Equal
This is where things get interesting: many hosts will advertise unlimited storage or bandwidth but will have reasonable limits stated within their TOS. Users may not be aware of the actual resource usage limits placed on their account (namely CPU & RAM) - the two resources with the biggest potential to impact other users on the shared hosting node you reside on.
Here's HostGator's TOS (things you may not do):
Use 25% or more of system resources for longer than 90 seconds. There are numerous activities that could cause such problems; these include: CGI scripts, FTP, PHP, HTTP, etc.
Run stand-alone, unattended server-side processes at any point in time on the server. This includes any and all daemons, such as IRCD.
Run any type of web spider or indexer (including Google Cash / AdSpy) on shared servers.
Run any software that interfaces with an IRC (Internet Relay Chat) network.
Run any bit torrent application, tracker, or client. You may link to legal torrents off-site, but may not host or store them on our shared servers.
Participate in any file-sharing/peer-to-peer activities
Run any gaming servers such as counter-strike, half-life, battlefield1942, etc
Run cron entries with intervals of less than 15 minutes.
Run any MySQL queries longer than 15 seconds. MySQL tables should be indexed appropriately.
When using PHP include functions for including a local file, include the local file rather than the URL. Instead of include(“http://yourdomain.com/include.php”) use include(“include.php”)
To help reduce usage, do not force html to handle server-side code (like php and shtml).
Only use https protocol when necessary; encrypting and decrypting communications is noticeably more CPU-intensive than unencrypted communications.
The restrictions here are quite specific; if you're running a WordPress blog & get a massive burst of traffic, it's very likely you could get your account suspended for using too many resources on your node.
The Math Behind Unlimited Bandwidth
There are plenty of hosts that offer unlimited or unmetered bandwidth. The amount of bandwidth your server can use is limited by the uplink your hosting provider has to their carriers.
Generally you'll find hosts with the following port speeds (in megabits or gigabits per second), which cap monthly transfer as follows:
10Mbps: 3.2 Terabytes / month
100Mbps: 32.4 Terabytes / month
1Gbps: 324 Terabytes / month
10Gbps: 3.2 Petabytes / month
100Gbps: 32.4 Petabytes / month
So the port speed on your server alone places a hard restriction on your plan; your bandwidth technically isn't unlimited (there is a real point at which you will saturate your connection).
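If you want to sanity-check those numbers, the conversion for a 30-day month (2,592,000 seconds) is:

\[ \text{max transfer} = \frac{\text{port speed in Mbps}}{8}\ \text{MB/s} \times 2{,}592{,}000\ \text{s} \]

For example, a 100Mbps port: \(100/8 = 12.5\ \text{MB/s}\), & \(12.5 \times 2{,}592{,}000 \approx 32.4\ \text{TB/month}\).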
Customer Expectations
We see time & time again that customers fail to understand the true meaning of unlimited. Instead they get drawn in by the allure of unlimited resources at a very low cost.
This in itself creates a number of problems for web hosts:
Your users expect the world from your hosting packages if you do not set realistic resource limits.
This results in complaints & negative reviews online.
This in turn creates another problem: web hosts that offer unlimited plans tend to lock customers in by only allowing them to pay for 1 or more years in advance. Bluehost is a prime example of this; here's an excerpt from their TOS:
Bluehost does not set arbitrary limits on the amount of disk space a Subscriber can use for the Subscriber's website, nor does Bluehost charge additional fees based on an increased amount of storage used, provided the Subscriber's use of storage complies with these Terms. Please note, however, that the Bluehost service is designed to host websites. Bluehost does NOT provide unlimited space for online storage, backups, or archiving of electronic files, documents, log files, etc., and any such prohibited use of the Services will result in the termination of Subscriber's account, with or without notice. Accounts with a large number of files (inode count in excess of 200,000) can have an adverse affect on server performance. Similarly, accounts with an excessive number of MySQL/PostgreSQL tables (i.e., in excess of 1000 database tables) or of database size (i.e., in excess of 3GB total MySQL/PostgreSQL usage or 2GB MySQL/PostgreSQL usage in a single database) negatively affect the performance of the server. Bluehost may request that the number of files/inodes, database tables, or total database usage be reduced to ensure proper performance or may terminate the Subscriber's account, with or without notice.
So you have to be extremely careful when signing up for 12 or more months; if you use too many resources you could find yourself with a terminated account & no refund.
Still Thinking About An Unlimited Plan?
If you're still considering an unlimited plan, make sure you do your homework; the following tips should help you make a better choice:
Avoid hosts that make you pay upfront for a year or more at a time.
Don't be fooled by the lure of unlimited. Often you'll get better value/performance with hosts that have properly defined limits.
Be careful with "reviews" - there are a lot of affiliate-only review websites designed purely to make money from popular web hosts.
Always read the TOS to see what the fair use policy is at the host.
Test multiple hosts; you may have to invest a small amount of money to find one that suits your needs.
Don't expect the world from a shared hosting environment; after all, you're sharing a server with potentially hundreds of other people.
Always keep backups; if your account gets suspended it can sometimes be hard to get your files back.
2 notes