Site Update: The Camel’s Back is Well-and-Truly Broken!

As this site slowly steams towards its second anniversary at the https://goughlui.com address, and (nearly) half a million views, this has been a week of terrible service. The site was down or intermittent for the better part of the past two days, during Australian business hours. Despite all the care taken in employing CloudFlare and WP Super Cache, it seems that the load of my site, combined with the abuse of resources by fellow sites on the shared cloud hosting platform, was too much, causing the server to collapse into a pile of 503 and database errors from which it could never fully recover.

Some readers might not have noticed this, because CloudFlare was doing a good job of keeping the cached homepage up and running, but anyone who tried to access the articles was sternly met with “Error establishing a database connection”. Crawlers, such as the Googlebot, were also complaining to me that my site was down.

When I started this site as a hobby, I could never have imagined reaching this level of readership. My approach was to choose a low-cost hosting plan based in Australia, as that was where my friends were, and to accept that (very) small amounts of downtime were tolerable, as I couldn’t do any better from my home ADSL2+ connection anyway! But since then, the site has grown into a resource for many niche topics, a place where information on esoteric technological products can be found, making it both moderately trafficked and valuable.

I must apologize to my precious readers for the unreliability. I know just how frustrating it is when you’re looking for something, only to be thwarted by an error message. I have always meant to keep this site alive for as long as I could afford to do so, and its time is not up just yet. Don’t panic!

The issues also reflect poorly upon me and my “economically” driven choices. That said, I’m happy to report that my experiments with ad revenue have proved successful in offsetting the cost of hosting with Ziphosting – thanks, in part, to how cheap that hosting was.

Despite living with Ziphosting for the better part of two years and its very random, “moody” quality of service, I felt that it was time to leave (even though the time I had paid for is not actually up quite yet). Thank you, Ziphosting, but this downtime is not downtime I can swallow anymore.

As a result, I am pleased to announce that after a sleepless night, and by leveraging the connection at my university, I have been able to migrate this site almost seamlessly to a new account at GoDaddy, with hosting for the next five years paid upfront (out of my own wallet). While every hosting provider has its nightmare stories, I thought I’d take a bet on this one because of their long-running track record, Australian billing, and promotional pricing. It costs about twice what I paid Ziphosting, but I’m fairly confident that my advertising revenue should help me break even, or at least reduce the cost to something less burdensome.

As my first substantial WordPress migration, I’m quite happy to see that it went more easily than I expected. It was just another one of those “big experiments” which worked out in the end.

Moving to GoDaddy’s cPanel system was a little confusing at first, but I got the hang of it pretty quickly. GoDaddy’s resources for shared hosting accounts are relatively limited, but they appear to be sufficiently provisioned, and the limits should prevent any one site from adversely affecting the others. Only time will tell whether their reliability holds up.

But one thing’s already apparent – GoDaddy is faster than Ziphosting.

[Image: better-godaddy]

Long live goughlui.com!

The setup still retains CloudFlare and WP Super Cache to give the site the best chance of survival, and the same anti-spam solutions are in place as well. If you find anything broken, please let me know. E-mail may be broken for up to 24 hours as cached DNS MX records expire and are refreshed.
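If you’re curious whether the new records have reached your resolver, a quick check is possible with a short script. Here’s a minimal sketch assuming the third-party dnspython package (older releases of the library name the call dns.resolver.query rather than resolve):

```python
# Quick check of a domain's MX records, to see whether caches have
# picked up the new host yet. Requires the third-party "dnspython"
# package: pip install dnspython
import dns.resolver

def print_mx(domain: str) -> None:
    answers = dns.resolver.resolve(domain, "MX")
    for record in sorted(answers, key=lambda r: r.preference):
        print(f"{domain} MX {record.preference} -> {record.exchange}")

if __name__ == "__main__":
    print_mx("goughlui.com")  # compare against the new host's MX target
```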


9 Responses to Site Update: The Camel’s Back is Well-and-Truly Broken!

  1. George says:

    Thanks for everything you do to keep this site going. It’s a really nice and interesting read! 🙂

    • lui_gough says:

      Thanks for your comment! Always great to know the commenting system works … *eyeballs the 150+ spam messages caught overnight*. It was a marathon to get it changed over, but I’m just glad it’s back up and running 🙂

      – Gough

  2. shasheene says:

    The real problem here is that you (along with many other bloggers) are using WordPress – a system which dynamically generates pages (executing relatively heavy PHP code underneath) to produce what is, in effect, a static webpage. This costs a lot of computational resources per user.

    There are a few other options. For example, Jekyll (from GitHub, the open-source software collaboration company) takes a local folder of blog posts (formatted in, say, Markdown) and generates a static blog (with comments provided through external free services like Disqus, and even images hosted on Dropbox if you really want). Static HTML sites can be hosted on extremely low-end systems and can scale to many millions of readers for barely any cost at all, as it’s just a matter of serving a few hundred kilobytes of webpage – no database queries are required to build a page.

    (I haven’t yet posted any of my draft blog posts, but I am really impressed with the system – I recommend that tech enthusiasts who want a self-hosted solution check it out.)
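    To make that concrete, a minimal sketch of the static-generation idea in Python might look like the following. This is not Jekyll itself (Jekyll is a Ruby tool); the “markdown” package and the posts/_site folder names are assumptions purely for illustration.

```python
# Minimal sketch of static generation: turn a folder of Markdown posts
# into standalone HTML files. Requires the third-party "markdown"
# package: pip install markdown
from pathlib import Path

import markdown

POSTS_DIR = Path("posts")    # assumed layout: one .md file per post
OUTPUT_DIR = Path("_site")   # the generated static site

PAGE_TEMPLATE = """<!DOCTYPE html>
<html><head><meta charset="utf-8"><title>{title}</title></head>
<body>{body}</body></html>"""

def build_site() -> None:
    OUTPUT_DIR.mkdir(exist_ok=True)
    for post in sorted(POSTS_DIR.glob("*.md")):
        body = markdown.markdown(post.read_text(encoding="utf-8"))
        page = PAGE_TEMPLATE.format(title=post.stem, body=body)
        (OUTPUT_DIR / f"{post.stem}.html").write_text(page, encoding="utf-8")

if __name__ == "__main__":
    build_site()
```

    Serving the resulting _site folder needs nothing more than a dumb HTTP daemon – no PHP and no database.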

    • lui_gough says:

      Very true – with every convenience of database-driven PHP systems comes the overhead of computing and generating the page for each viewer. Luckily, with WP Super Cache, this can be reduced somewhat by pre-loading pages and caching the generated results for a few hours to days, depending on whether there are any changes. It’s definitely helped reduce loading, along with CDN caching of some static resources. It’s what a responsible shared-hosting “user” should do.
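      In essence, the caching principle is just this (a rough illustrative sketch in Python; WP Super Cache itself is PHP and considerably more sophisticated, and every name below is made up):

```python
# Rough sketch of the page-caching principle: generate a page once,
# then serve the stored copy until it goes stale. Illustrative only -
# not WP Super Cache's actual implementation.
import time

CACHE_TTL = 3600   # seconds a cached page is considered fresh
_cache = {}        # url -> (generated_at, html)

def render_page(url: str) -> str:
    """Stand-in for the expensive PHP + MySQL page build."""
    return f"<html><body>Content for {url}</body></html>"

def serve(url: str) -> str:
    entry = _cache.get(url)
    if entry and time.time() - entry[0] < CACHE_TTL:
        return entry[1]                    # cache hit: no heavy work
    html = render_page(url)                # cache miss: build the page
    _cache[url] = (time.time(), html)
    return html
```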

      Unfortunately, in the case of Ziphosting, they were returning 503s even for static resources from time to time. Their servers were so overtaxed that even basic web pages couldn’t reliably be served, let alone database-driven ones. That being said, if content was static (like my legacy pages), the chances of it remaining up were definitely higher.

      While I’ve settled on WordPress out of convenience, it’s not an ideal solution for everyone. I think I will persist with it for the foreseeable future, mainly because it’s what I started with and I’d rather not have to maintain several different CMSes.

      Thanks,
      Gough.

    • sparcie says:

      I think declaring that static content is the answer to everything is also a bit silly. It is true that static pages can be served in greater numbers and quite a bit faster, but that doesn’t meet the needs of everyone. The best part of any CMS is that it is incredibly convenient and easy to use, even for the most technologically challenged. Personally, I use WordPress not because I couldn’t do anything else, but simply because it lets me concentrate on generating content rather than on the technical aspects.

      I’d imagine the static system you’re describing has some inconveniences of its own, like the regeneration required every time anything changes – for a large site, rebuilding the static pages could take quite a while. I also wouldn’t like to rely on external services for anything too important, for obvious reasons. I’m also sure most people would find writing their site in any kind of code (Markdown or otherwise) rather cumbersome.

      I wonder how slow a machine would have to be before you couldn’t run PHP and MySQL? The only machines I have that fit that description are more than ten years old, and even then I can think of one that would be OK for small volumes. Considering this site isn’t being hosted on an “extremely low-end system”, I wonder how relevant the argument for static pages even is.

      Cheers
      Sparcie

      • shasheene says:

        Yep I agree it’s not a solution for the vast majority of people.

        It’s probably suitable for static sites for throw-away projects hosted on a virtual private server (at the $15/year, 256MB-RAM level) – this quite old post is interesting: http://lowendbox.com/blog/yes-you-can-run-18-static-sites-on-a-64mb-link-1-vps/

        The actual official use case is programmers who already do their work in a version control system: they can easily write posts and save them, the blog automatically “compiles” after every change, and the generated static site just needs to be pushed to GitHub, where it is published automatically with no intervention (and the hosting provider isn’t running any code it doesn’t control, which is good for security).

        Tuning the system and templates takes some time, and the workflow isn’t the most straightforward, so I completely agree that CMSes have their place. I didn’t know about the WordPress performance optimizations either!

        • lui_gough says:

          While that’s definitely admirable, I don’t think I’m up for it just yet. Time is always a factor, and while I’d always love to learn more, time often gets in the way and I just love to “get things done”. Of course, I ran into instability with the old host due to their failure to properly provision and limit resource use per user, and as I wasn’t “promised” any particular level of computing power, I probably got less than I deserved.

          At least GoDaddy provides some performance indicators which show resource usage, although the actual limits are pretty tight for the basic low-cost ~$5/mo hosting: 0.25 of a CPU (of what spec, I don’t know), 1MB/s disk I/O (and thus network I/O), 512MB RAM (any swap? not sure) and 100 process entries (and thus a limit on the number of ports). With optimizations, it seems to be sufficient – up until someone decides to open many posts in new tabs, where we do hit the limiter occasionally – but the long-term average shows lots of headroom. I suppose whatever optimization I don’t get around to, and can’t afford to spend the time on, will just come back to bite me in the backside in the form of paying for resource-level upgrades. 🙂

          Many larger sites do use WordPress, resulting in complex patchworks of optimizations and multi-site arrangements which I have little knowledge of at this time. Such arrangements often come about because companies would rather have it easy and get it done, paying someone else to “manage” their installation. That being said, I’ve never planned on being too famous, so I don’t think I need to worry much about it just yet. Hardware comes easy – I could more than easily fulfil all the needs of the site if I could host it at home, but alas, the internet connectivity is a problem, and with the nixing of the FTTP NBN plan, getting any decent upload (>3Mbit/s) at home is a bit of a dream.

          However, it is very true that no matter the resources, a large number of visitors could easily soak them up, so being efficient with static content might be the only option for some hosts or sites.

          – Gough

        • sparcie says:

          Whilst it sounds kind of cool, as a programmer myself I can see that it would be annoying to do all the time. Doing things the hard way can be fun, but I wouldn’t want to do it constantly.

          That article you linked was pretty interesting, but it overlooks the fact that memory usually isn’t the limiting factor in hosting. I have an old SPARC system (about 20 years old) that I used to host my download site: it only has about 320MB of RAM, but it would barely use 60–70MB on boot – and that was hardly an optimised setup, as it could run Apache 2.4 quite well and had workstation software installed.

          How fast was it? It has two 50MHz processors and one 60MHz one – hardly a beast – but it managed to serve static pages quite well. It couldn’t run MySQL- or PHP-based sites on the basis of CPU power alone.

          I understand why they talk about memory, though, as that’s basically one of the main ways VPS solutions are limited. Yes, they limit CPU power as well, but it isn’t as constraining.

          Note that if you do run your own webserver (VPS or otherwise), you need to be concerned with the security of the whole machine – not just the web server, but all services such as SSH, MySQL and so on. Make sure you set up a firewall!

          Cheers
          Sparcie

          • lui_gough says:

            Yeah, security can be very hard to get right, especially with unpublished exploits. Best practices only go so far, and sometimes you really do need to take things to extremes – checking user permissions, or even putting things into chroots to ensure they don’t “escape” and touch things they shouldn’t. Keeping everything that’s internet-facing up to date is practically a necessity.
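            For the curious, the chroot idea boils down to something like this Unix-only sketch (it must run as root, and real hardening layers on much more, such as dropping privileges afterwards):

```python
# Bare-bones illustration of confining a process with chroot.
# Unix-only and requires root; purely a sketch, not a hardening guide.
import os

def enter_jail(jail_dir: str) -> None:
    os.chroot(jail_dir)  # the filesystem root becomes jail_dir
    os.chdir("/")        # step inside the new root
    # From here on, path lookups cannot reach files outside jail_dir.
```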

            I used to run this site at home on a Raspberry Pi (overclocked to 1GHz) – Apache + MySQL + PHP ran very slowly, taking about 5 seconds to build a page. At that rate, a single core serves barely a dozen pages per minute, so you can imagine it wouldn’t take more than 5 users to bog it down into total unresponsiveness.

            Limiting CPU power isn’t too limiting – I suppose that’s true to some extent. But when MySQL tables start to get big, queries can blow out the amount of CPU required to build a page, and if you don’t leverage caching, the penalties grow as the site grows. Memory is probably the bigger issue, especially if it’s a hard limit, because that’s a sure way to make processes crash and misbehave. Of course, for static content, the memory usage is generally a small amount per socket – mainly the forked process handling that particular connection – whereas for dynamic content you have to add the cost of a PHP interpreter and the MySQL database engine.
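            The fork-per-connection model for static files can be sketched in a few lines of Python (Unix-only, and purely illustrative of the classic Apache prefork idea):

```python
# Sketch of fork-per-connection static serving: each request is handled
# by a lightweight child process, so memory cost per client stays small -
# no PHP interpreter or database engine is loaded per page view.
import http.server
import socketserver

class ForkingHTTPServer(socketserver.ForkingMixIn, http.server.HTTPServer):
    """One forked child per request, like Apache's traditional prefork."""

if __name__ == "__main__":
    handler = http.server.SimpleHTTPRequestHandler  # serves the current directory
    with ForkingHTTPServer(("", 8080), handler) as httpd:
        httpd.serve_forever()
```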

            All in all, I still wish I could serve from home – ultimate control would be lovely to have, and backing up would be so much quicker – but the cost and technicalities seem to favour the shared hosting option, along with its limitations. I suppose one of the bigger risks is that a compromised server could go on to compromise the other machines on my home network as well … so perhaps that’s one benefit of not having it at home.

            – Gough
