Monthly Archives: April 2015

My first AWS Summit, and despite being held during Sydney's worst storm in years, AWS put on a good show.

Datacom: Cloud & Enterprise Tools

While obviously an opportunity for partners to peddle their wares, this session stayed focused on the methodology and process behind deciding which workloads should move to the cloud. Using business process mapping to break down legacy software, and splitting the process into discover, analyse, map, profile, migrate & integrate phases, it gave an insight into what goes on behind the scenes of some of the larger cloud migration projects.

Business 101: Introduction to the AWS Cloud

I was concerned this would be a bit too business-focused and leave me wanting more technical detail; while it didn't drill down into the nuts and bolts, I am glad I included it as part of my first summit.

While it gave a good overview of all the services, something that would be wasted on an AWS veteran, the case study on Reckon is what gave me the most value in this session.

Breaking down Reckon's journey from an on-premises company to a close-to-all-in cloud company covered a huge range of smaller steps their IT department, and the company as a whole, took. It was presented in a way that focused on what each step achieved, leaving the how, and the order, open for the audience to follow. Steps included finding a technical champion, handling legal & compliance, knowing when to move from AWS Business to AWS Enterprise support, and the one that stuck with me: implementing a cloud-first policy. Workloads go in the cloud unless there is a reason to keep them on premises.

Technical 101: Your First Hour on AWS

This satisfied my technical curiosity. As someone who started a trial account, fired up a micro instance and then wondered what the hell I was supposed to do next, I found this session great at covering the journey from a new account to a somewhat hardened account with user- and group-level security, while diving into VPCs, infrastructure examples, Direct Connect, billing & cost management, VM services, all the way through to touching on DevOps automation.

Technical 201: Automating your Infrastructure Deployment with AWS CloudFormation and AWS OpsWorks

This was the one I was looking forward to, and it did not disappoint.

It was a solid segue from the previous session: once you had dipped your feet into a range of services, it drilled down into their DevOps stack.

As an AWS newbie, I found the step-by-step walkthrough of the whole CloudFormation & OpsWorks DevOps stack great: full of solid use cases, followed up by real-world examples and lessons thanks to Mike Lorant from Fairfax.

The important stuff!

Loot

Big thanks to the Puppet booth team for the Pro Puppet book!


While Chrome Developer Tools lets you test for many HTTP POST vulnerabilities relating to invalid POST data, testing for a slow POST vulnerability needs Sergey Shekyan's slowhttptest tool.
https://github.com/shekyan/slowhttptest

The installation is very straightforward:
https://code.google.com/p/slowhttptest/wiki/InstallationAndUsage

As the docs say, you'll need libssl-dev; if it's missing, the build will fail with an OpenSSL-related error.

My LMDE install does not have it out of the box, though Synaptic delivered.
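For a Debian-style system, the whole build is a sketch along these lines (package manager commands will differ on other distros):

```shell
# Install the OpenSSL development headers slowhttptest builds against
sudo apt-get install libssl-dev

# Grab and build slowhttptest from source
git clone https://github.com/shekyan/slowhttptest.git
cd slowhttptest
./configure
make
sudo make install
```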


After the make, running slowhttptest with no arguments hits localhost by default, which is nothing interesting without a local test server.

Shekyan also includes the syntax to launch an example Slowloris attack to test on your own servers.
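A minimal sketch of that kind of invocation follows; the hostnames and numbers here are illustrative, not from the docs, and you should only ever point this at servers you own:

```shell
# Slowloris (slow headers) test: -H selects slow-headers mode,
# -c is the connection count, -i the interval between follow-up bytes,
# -r the connection rate, -x the max follow-up data length,
# -p the timeout to wait for an HTTP response.
slowhttptest -H -c 1000 -i 10 -r 200 -t GET \
  -u http://my-test-server.example.com/ -x 24 -p 3

# A slow POST test is the same idea with -B (slow message body) instead of -H
slowhttptest -B -c 1000 -i 10 -r 200 -t POST \
  -u http://my-test-server.example.com/submit -x 24 -p 3
```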


After the success of a preliminary benchmark of my Elasticsearch use case, I thought I would see how it ran on an ARM-based ODROID-U3.

The U3 is a credit-card-sized mini PC from Hardkernel that runs Android or Linux.

The ODROID-U3 specs include a 1.7GHz Exynos 4412 Prime Cortex-A9 quad-core processor and 2GB RAM. While it supports eMMC storage, I'll be using a 16GB SanDisk Ultra UHS-I Class 10 SD card, in part to make things interesting, and in part so I can easily swap my Android XBMC eMMC between projects.

I have gone with Ubuntu 14.04 from the ODROID forum site.

Installing Oracle Java 8 via apt-get was straightforward; however, Elasticsearch via packages.elasticsearch.org did not explicitly support armhf.

I added the Elasticsearch repository to /etc/apt/sources.list as outlined in the docs.
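For reference, the repository line from that era looked something like this; the 1.5 branch is an assumption based on when this was written, so check the current docs for the right version:

```shell
# Addition to /etc/apt/sources.list for the Elasticsearch repo (version illustrative)
deb http://packages.elasticsearch.org/elasticsearch/1.5/debian stable main
```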

However, apt-get update then failed, complaining it could not find an armhf package index for the repository.

As Elasticsearch runs in Java, I figured running the x86 version would be fine; I just needed to figure out how to do it.

After hitting a dead end editing /etc/dpkg/dpkg.cfg.d/architectures, I tried adding architecture tags to /etc/apt/sources.list as outlined in the Multiarch/HOWTO.
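A sketch of the arch-qualified sources.list entry from the Multiarch/HOWTO approach (the repository path and version here are assumptions):

```shell
# Restrict this repo to the amd64 package index so apt stops
# looking for a (non-existent) armhf index
deb [arch=amd64] http://packages.elasticsearch.org/elasticsearch/1.5/debian stable main
```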

It worked a treat: package sources updated, and Elasticsearch installed as a deb package.

Like any software raised on Linux, even one that runs under a JVM like Elasticsearch, running it on Windows brings a few quirks to light.

One of the most common Elasticsearch environment variables is ES_HEAP_SIZE, shown in the System variables panel below.

With the default set to 1GB, setting this is often done early on, though note the following two gotchas on Windows.

  1. After you set ES_HEAP_SIZE, you need to re-install the Windows service. Restarting Elasticsearch won't pick it up.
  2. If you are restarting the service from the command line, remember to open a new CMD window after setting the environment variable. A stale window will hold the old value (or none, if none was set), and restarting the service in that cmd.exe session won't update the heap size.
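From a fresh CMD window, the reinstall sequence is roughly the following; the install path and heap size are illustrative, but service.bat does ship in the Elasticsearch bin directory:

```shell
:: Run from a NEW cmd.exe so the updated variable is picked up
set ES_HEAP_SIZE=4g
cd C:\elasticsearch\bin
service.bat remove
service.bat install
service.bat start
```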

You can confirm the environment variable took effect by checking the jvm section of /_nodes/stats?pretty=true.
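One way to check, assuming Elasticsearch is listening on the default port, is to pull the node stats and look for the JVM heap maximum:

```shell
# heap_max_in_bytes under jvm.mem should reflect ES_HEAP_SIZE
curl "http://localhost:9200/_nodes/stats?pretty=true" | grep heap_max_in_bytes
```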

Also, remember not to cross 31GB! Above roughly 32GB the JVM loses compressed object pointers, and a bigger heap can actually hurt.

http://www.elastic.co/guide/en/elasticsearch/guide/current/heap-sizing.html