Hello, my name is James. I'm a software and database developer, and I design things for print and the web. Phew, that was a mouthful.

Looking back

In 2011, having just taken the plunge to join a long-standing friend in his endeavour to set up a creative agency, I'd begun paving the way for a career change that's put me where I am today.

Embracing social media helped this fledgling agency pick up new clients, and I worked closely with a number of business owners and stakeholders to develop their social media presence and use it effectively to engage with their prospective customers, supporters and audiences.

Back then, social media was a relatively new buzzword for business, and doing it right was something that didn't come naturally. Trying to get the point across that broadcasting your message was never going to be effective had me banging my head against the wall frequently; it seems that lesson is still as relevant now as it was then.

Personal circumstances led to the agency ultimately closing and I faced redundancy, but a new opportunity through a professional connection led me to a multi-disciplinary role. Over the next four years that role developed into leading the day-to-day operations of an IT team, supporting 175 users and the infrastructure of the market-leading financial services business it has now become.

Today I'm not able to dedicate the same time and effort to social media as I used to, with my role evolving towards managing an IT department. I'll still find opportunities to talk about new and upcoming technology, and to share information, tips and tricks I think might help, so hopefully my site will find some new life.

As part of my reflection on how I got here, I decided to resurrect some of my ‘tools of the trade’, mainly to see if anyone visited my blog anymore, and was quietly surprised to see some traffic coming through.

'Reputation Management' – also known as googling yourself – was an important task for anyone trying to establish themselves as a credible source of information, so out of curiosity I wanted to see just what Google held on me. I was delighted to find that some of my work had been picked up by Business Insider in their article Humanizing Your Social Media Efforts. The site is ranked in the top 200 sites in the world in terms of traffic volume, so that's huge exposure for me!

This might help get a bit more traffic to the site today, and explain why you are reading this, so I'll be thinking of some relevant, useful and quality content to put here soon.

 

Starwind Virtual SAN and HP StoreVirtual VSA – side by side

An upcoming project I'll be involved in centres on High Availability and Disaster Recovery, and whilst Failover Clustering ticks a number of boxes on the high-availability front, it does come with some additional caveats. I wrote recently about areas of weakness I'd found in a traditional Windows Server Failover Cluster, the main one being that shared storage introduces a new single point of failure.

To counter this, there are a number of possible options at a variety of price points, from dedicated hardware SAN devices configured as a cluster themselves, to software-based virtual SAN solutions that claim to achieve the same.

This is a brief update on my experiences of Virtual SAN, based on two products, HP StoreVirtual VSA and Starwind Virtual SAN. I should note these are not performance reviews, just some notes on my experiences setting them up and using them in a lab environment.

Starwind Virtual SAN


In its basic form, this is a free piece of software; support is provided by a community forum, and there are naturally commercial licenses available. This edition lets you quickly provision iSCSI-based storage to be served over the network, but has no resiliency built in. I implemented this recently to provide additional storage to an ageing server running Exchange where physical storage was maxed out, so network-based storage was the only option, and Exchange needed to see it as a physical disk rather than a network share.

There is a two-node license available for this, providing replicated storage across two servers running the software. This is where it provides real value, as you've now introduced storage resiliency given that it's available in two places. From experience, once the initial replication has taken place, provided you've set up your iSCSI connections and MPIO to use the multiple routes to the storage, powering down one of the servers running Starwind Virtual SAN had no impact on access to the data provided by the Virtual SAN. Once the server was powered back up, it took a little time to re-establish its replication relationship, but I'm going to put this down to my environment.
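For anyone setting this up from the Windows side, this is roughly what the multi-path iSCSI connection looks like in PowerShell on Server 2012 R2. It's a sketch only – the two portal addresses are placeholders standing in for the Starwind nodes, so substitute your own and check the discovered target before connecting:

# Install MPIO and let it claim iSCSI devices (a reboot may be needed after the feature install)
Install-WindowsFeature -Name Multipath-IO
Enable-MSDSMAutomaticClaim -BusType iSCSI

# Make sure the iSCSI initiator service is running and starts automatically
Set-Service -Name MSiSCSI -StartupType Automatic
Start-Service -Name MSiSCSI

# Register both Starwind nodes as target portals so there are two paths to the same storage
New-IscsiTargetPortal -TargetPortalAddress "192.168.0.11"
New-IscsiTargetPortal -TargetPortalAddress "192.168.0.12"

# Connect to the discovered target with multipathing enabled, persisting across reboots
Get-IscsiTarget | Connect-IscsiTarget -IsMultipathEnabled $true -IsPersistent $true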

The software can be used in one of two ways: you can install it directly to your server (bare metal) or you can install it to a virtual machine, with both Hyper-V and VMware vSphere supported. There are benefits to installing directly to your server, mainly lower RAM usage and not having the overhead of a full OS install running in a VM on top of your hypervisor. Two network connections are required: one as a synchronisation channel, ideally a direct connection between the two servers; the other for management and health monitoring of the synchronisation relationship.

For extra resilience, if the license permits, a further off-site node can be added to the configuration for asynchronous replication.

HP StoreVirtual VSA


StoreVirtual is a virtualised storage appliance provided by HP, originally a product known as LeftHand. It is only available as a virtual machine and so adds some overhead to its implementation, using at least 4GB of RAM, which increases depending on the capacity hosted. Supported on both VMware and Hyper-V platforms, there is a wide market for the product.

The StoreVirtual VSA can function as a single node, and equally works in a multi-node configuration with scale-out functionality. Because it cannot be installed bare metal other than on a dedicated hardware appliance, performance can be slightly impacted by the overhead of the hypervisor providing access to the underlying physical storage.

In terms of management, there is a dedicated management interface, provided by installing the management tools on another computer (or VM) on the network. Here it's simple to provision storage, set up access control over who can access that storage, and see health and performance information.

High availability is achieved not through MPIO, but through presenting a group of servers as a single cluster. This, however, needs to be managed by a further virtual machine running a role called the Failover Manager (FOM), which again adds to the overall overhead of the implementation. In an ideal scenario, this would be hosted on hardware independent of the other two nodes to avoid a loss of quorum. StoreVirtual also supports asynchronous replication for off-site replication.

Update: for clarity, FOM is required when an even number of nodes are active, to ensure a majority vote is possible for failover purposes.

Limitations of Testing

My lab consists of 2 x HP MicroServer Generation 8, both with Intel Xeon E3-series processors and 16GB of RAM, and both connected to an HP ProCurve 1800 managed gigabit switch. With only 16GB of RAM on each hypervisor, it's difficult to simulate a real-world workload on the I/O front, particularly when, at a bare minimum, 6GB needs to be allocated to StoreVirtual and a FOM on one of the machines, and 4GB to the redundant node on the other.

Pros and Cons

Starwind:

Pro – Installs directly to Windows Server 2012 R2 or to a VM
Pro – Relatively low memory footprint
Pro – Lots of options to tweak performance, can leverage SSD cache etc.
Pro – Generous licensing for evaluation purposes – a two-node license (provided the nodes are VM-based) is available free of charge

Con – I’d heard of Starwind before, having used a few of their useful tools, but would you trust their solution with your enterprise data?
Con – Got caught up in a full resync when one node was shut down and restarted, and it took some time to re-establish the synchronisation

HP StoreVirtual:

Pro – A brand name you know, and might find easier to trust
Pro – Now up to its 13th version, the underlying OS is proven and stable
Pro – Intuitive management tools

Con – Must be run as a VM; the minimum RAM required is 4GB for a StoreVirtual node, and a Failover Manager is required to maintain quorum in a two-node configuration
Con – The 1TB license expires after three years, so for lab use, be prepared for when that time comes

Closing thoughts

I can vouch for solid performance from Starwind Virtual SAN, as the shared storage for my lab's highly available Hyper-V VMs is running on a two-node Starwind Virtual SAN. Ultimately, a lack of available hardware to perform a comparable test has meant I have not been able to use StoreVirtual to host the same workload. The licensing of StoreVirtual put me off a little: Starwind's license is non-expiring, but the 1TB StoreVirtual license on offer is restricted to three years.

Once I’ve found some suitable hardware to give StoreVirtual a fuller evaluation, I’ll add more detail here.

 

Link Aggregation between Proxmox and a Synology NAS

I’ve been using Synology DSM as my NAS operating system of choice for some time, hosted on a HP N54L Microserver with 4 x 3TB drives and a 128GB SSD. This performs well and I’ve been leveraging the iSCSI and NFS functionality in my home lab, setting up SQL Database storage and Windows Server Failover clusters.

Having Proxmox and Synology hooked up by a single gigabit connection was giving real-world disk performance of around 100MB/s, near enough maxing out the connection. For Synology to have enough throughput to be the storage backend for virtual machines, this would not cut it, so I installed an Intel PRO/1000 PT Quad in each machine, giving an additional four gigabit network ports.

Proxmox itself supports network bonding modes of most kinds, including the most interesting one, balance-rr (mode 0), which effectively leverages multiple network connections to increase available bandwidth rather than just providing fault tolerance or load balancing.

I could easily create an 802.3ad link-aggregated connection between the two, which worked perfectly, but in a directly connected environment this serves little purpose beyond redundancy: the hashing algorithms used for load balancing will try to route all traffic from one MAC address to another via the same network port. So I set out to investigate whether the Synology could support balance-rr (mode 0) bonding, which sends packets out across all available interfaces in succession, increasing throughput.

Note: you'll need to have already set up a network bond in both Synology and Proxmox for this to work. I won't cover that here as it's simple on both platforms; what I'll be talking about is how to enable the mode required for the highest performance.

The simple answer is no: Synology will not let you configure this through the web interface. It wants to set up an 802.3ad LACP connection or an active-passive bond (with failover in mind rather than performance). I found, however, that provided you're not scared of a bit of config file hacking (you probably wouldn't be using Proxmox if you didn't know your way around a Linux shell, and DSM is based on Linux too), you can enable this mode and achieve the holy grail of a high-performance aggregated link.

Simply edit /etc/sysconfig/network-scripts/ifcfg-bond0 and change the following line:

BONDING_OPTS="mode=4 use_carrier=1 miimon=100 updelay=100 lacp_rate=fast"

to

BONDING_OPTS="mode=0 use_carrier=1 miimon=100 updelay=100"

Now, reboot your Synology NAS and enjoy the additional performance this brings.
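If you want to confirm the change took effect after the reboot, the bonding driver reports the active mode; for balance-rr you should see something like this:

cat /proc/net/bonding/bond0 | grep "Bonding Mode"
Bonding Mode: load balancing (round-robin)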

For reference, here’s the output from ‘iperf’ performing a single transfer:

root@DiskStation:/# iperf -c 10.75.60.1 -N -P 1 -M 9000
WARNING: attempt to set TCP maximum segment size to 9000, but got 536
------------------------------------------------------------
Client connecting to 10.75.60.1, TCP port 5001
TCP window size: 96.4 KByte (default)
------------------------------------------------------------
[  3] local 10.75.60.2 port 37463 connected with 10.75.60.1 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  3.40 GBytes  2.92 Gbits/sec
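For anyone reproducing the test, the other end simply needs iperf running in server mode on the Proxmox host, listening on the default port 5001 shown above:

iperf -s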

Not bad?!?

High Availability and DR in SQL Server 2014 Standard

In my day job it's part of my role to consider ways in which the IT department can work more effectively, as well as ways we can get our IT infrastructure to work better for us. A project that's currently under way is migrating from SQL Server 2008 R2 to SQL Server 2014 Standard. The current plan is that it will run on its own box, and whilst it will have the horsepower to deal with the load, this approach is ultimately vulnerable to a number of different types of failure that could render the database server unusable and adversely affect the business.

Part of my studies towards MCSE Data Platform involves High Availability and Disaster Recovery strategies for SQL Server, but most of the relevant features are noticeably absent in the Standard edition of SQL Server.

So, how can I work with Standard and still give us some type of fault tolerance?

I'm currently exploring failover clustering, on either physical or virtual servers, using Server 2012's built-in Failover Clustering services along with a SQL Server 2014 cluster. Standard Edition, provided it is correctly licensed (either through multiple licenses or with failover rights covered by Software Assurance), will allow for a two-node cluster.
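For context, the Windows side of that only takes a few PowerShell commands on Server 2012. This is a rough sketch – the node names and cluster IP are placeholders for whatever your environment uses – and the SQL Server 2014 failover cluster instance is then installed on top through SQL Server setup's failover cluster installation option:

# On each node, add the Failover Clustering feature
Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools

# Validate the configuration, then create the two-node cluster
Test-Cluster -Node SQLNODE1, SQLNODE2
New-Cluster -Name SQLCLUSTER -Node SQLNODE1, SQLNODE2 -StaticAddress 192.168.0.50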

Windows Failover Clustering relies on shared storage, however, which introduces another potential point of failure in the storage platform that could also lead to downtime.

Failover Clustering is great, but how do I provide fault tolerant storage to it?

I’ll document here my experiences with both hardware and software solutions to this.

I'm considering Synology rackmount NAS devices in a high-availability configuration, but the potentially more cost-effective solution is to virtualise a VSAN in a hypervisor of choice; SANsymphony and StarWind Virtual SAN are options I'll consider. All of this will need to be tested in my home lab, which is a Lenovo ThinkServer TS440 with a Xeon E3 processor, 32GB of RAM and 256GB of SSD storage, backed by an HP N54L providing shared storage via iSCSI. It runs Proxmox as my hypervisor of choice, a platform I'd been using for a number of years before Hyper-V really took off; it's open source with commercial offerings and uses KVM/QEMU. The solution must work here first.

I’ll post an update soon.

What to do when your SA account gets locked in SQL Server

By default, SQL Server 2008 R2, when using mixed-mode (Windows and SQL Server) authentication, sets up the SA account with a password policy that locks it after a number of failed login attempts. This is particularly troublesome when a rogue process attempts to log in with incorrect or outdated SA credentials, and it's all too easy to skip over setting up additional administrators with Windows accounts.

On my development server, where I have a number of projects under way, I naively missed the step of setting up a local or domain account as an administrator, meaning SA was the only account with sysadmin privileges on the instance. This, paired with the default option of enforcing the password policy on the account, made it all too easy to inadvertently lock the SA account, losing access to the entire contents of the databases.

ApexSQL produce a number of SQL-related tools, and for the one I was trialling, one of the first steps in running the software is to set up a database connection: you enter a server/instance name and choose Windows or SQL authentication. A helpful (but dangerous) feature is that the software appears to attempt to connect using the credentials as you type; this leads to the SQL Server being spammed with incorrect logins if you're not quick, eventually leading to the account being locked.

Time to panic.

The trick, in this circumstance, is to make sure you are logged on to the server with an account that has local administrator privileges. As long as you have this, you can leverage SQL Server's administrative connection in single-user mode. To achieve this, shut down the SQL Server service for the instance – remember this will disconnect everyone on the instance, so only do this out of hours, when you have no choice, or on a server only you are connecting to.

Then open up a command prompt with administrator privileges and navigate to the SQL executables for your instance; it'll be something like C:\Program Files\Microsoft SQL Server\MSSQL10_50.1\MSSQL\Binn

Run sqlservr.exe with the additional -m switch and you'll fire it up in single-user mode. Now open up Management Studio and connect using Windows Authentication. With a bit of luck, you'll be in.
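Put together, the whole sequence from an elevated command prompt looks something like this. The service name below assumes a default instance (a named instance would be MSSQL$InstanceName), and the path is the one from my install above:

net stop MSSQLSERVER
cd "C:\Program Files\Microsoft SQL Server\MSSQL10_50.1\MSSQL\Binn"
sqlservr.exe -m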

Now, go unlock the SA account. You’ll have to change the password as part of the unlocking process, but go ahead and change it back once this has been completed if it’s needed.
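If you'd rather do this from the command line than through the Management Studio GUI, the unlock is a one-liner via sqlcmd – the password here is just a placeholder, and adjust -S if you're connecting to a named instance:

sqlcmd -S . -E -Q "ALTER LOGIN sa WITH PASSWORD = 'TemporaryPassword!1' UNLOCK;"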

With this complete, you can terminate the SQL Server instance running in single-user mode by hitting Ctrl+C in the command prompt and confirming with Y. Now bring the SQL Server service back up, and normality should be restored.
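And of course, the real lesson is to set up that second sysadmin before you need it. For illustration, something like this would have saved me the trouble – the domain and account names are placeholders, and again adjust -S for your server or instance:

sqlcmd -S . -E -Q "CREATE LOGIN [MYDOMAIN\james] FROM WINDOWS; EXEC sp_addsrvrolemember 'MYDOMAIN\james', 'sysadmin';"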

Social Media Masterclass at Business South

I was privileged to be invited to speak at the Social Media Masterclass at Business South 2012. I spoke about Facebook as part of a panel of Social Media experts to offer some insight into using Social Media for business.

The panel fielded questions that were pre-recorded by some of the estimated 2000 delegates who attended the Business to Business event, as well as taking questions from the attendees at the session – an estimated 200 local business leaders took part at the WOW Business Growth Zone at the event.

It was a real pleasure to see my experience in the field of Social Media acknowledged by being invited to speak at this prestigious event.

KLM prove they ‘get’ social media

Dutch airline KLM, who you may remember me writing about previously with their 'Tile and Inspire' campaign, have demonstrated a great understanding of social media with a number of successful and innovative campaigns in the past.

Their latest stroke of social media brilliance goes by the name of 'Seatmates'. Whilst not a completely new concept – it takes inspiration from Ticketmaster's interactive seat maps – it allows passengers to choose seats based on the social media profiles of those already on the flight.

The service currently works with Facebook and LinkedIn, and using it is completely optional – passengers are of course able to opt out of the feature, or at least restrict what information is published about them – but it could prove a great way for passengers to meet and interact with people who share the same interests or other characteristics.

It would certainly liven up a 15-hour transatlantic journey knowing I could choose to sit next to someone I share some common ground with.

I wonder what’s next for KLM?

Think it’s ok to Cross Post on Social Media? Here’s why you shouldn’t

Cross-Posting is the process of linking together Social Media accounts and making a single post that reflects across each platform.

It might be tempting to link together your Social Media accounts to save a little time when you’ve just written a great piece of content, but don’t do it, please.

What you write about your post is probably more important than the post itself so far as social media is concerned – after all, what's the point of writing great content if nobody is prepared to read it?

Facebook, Twitter and LinkedIn in particular have very different users, and therefore very different cultures. Twitter is fast moving and constantly changing – the average tweet has a lifespan of seconds, so you need to write your post in a way that captures attention as it flies past in the stream. Facebook posts can be highly targeted and generally reach a restricted audience – the fans of your business page (unless you want to spend £s on promoting your content) – so you can go into more detail and make it more appealing to them. LinkedIn is geared towards professional users discussing business-related topics, so you may find that not all content is suitable. All three are very different, so one post will certainly not fit all.

A carefully crafted 140-character tweet about your new blog article will go down much better than an automatically shortened post that links to Facebook, which in turn links to your blog. Going that route, you'll lose potential readers who choose not to have a Facebook account or don't want to sign in – I'm certainly not going to sign in to Facebook to visit a link I found on Twitter!

Cross-posting could be perceived as laziness. Do you really want to be seen as not caring about your fan base, or as unwilling to embrace the fact that each network has its own culture? You are not a robot; you are representing a brand and have a personality.

It’ll also make it difficult to track your users – how can you find out if your Tweet was effective, when your referral traffic has been routed through Facebook before arriving on your web page?

Spend a little time crafting your posts, tailoring your content and knowing how to engage with your audience and you’ll find it might not take as much time as you expect.

Do you cross-post? If so, does it work for you?

Social Media – not just another Megaphone

As part of Mashable's Social Media Day in 2010, I attended a meetup hosted by a local marketing company that included a seminar on Social Media for business.

Attended by 70 local business owners, directors and managers, the seminar aimed to introduce the business benefits of using social media to generate sales, awareness and engagement.

Part of the presentation included a slide entitled 'Just another megaphone'. That, to me, says the marketing agency intended to teach you to use Social Media as a method of broadcasting a message and hoping enough people pay attention to it. At the time, Social Media for business was a very new consideration, and broadcasting a message seemed to be the right way to use it, at least in their eyes.

Of course now, everyone knows that Social Media as a broadcast platform is a terrible idea – yes, there can be a mixture of broadcast, conversation and sharing, but to do Social Media properly, you must recognise your users and actively engage with them – start a conversation, ask questions, make comments. Broadcasting alone just won’t be tolerated.

So has Social Media changed, or has our perception and understanding of how to use it to communicate with our audience changed? Comment below on your experience of using Social Media for business – do you use it to broadcast messages alone, has it worked for you?

The first rule of London 2012, you don’t talk about London 2012…

… that's the message LOCOG, the London Organising Committee for the Olympic Games, is sending to the 75,000 Olympic volunteers, and the hundreds of thousands of other hopefuls, in the lead-up to the biggest sporting event in the world.

As someone who uses social media daily, in both personal and professional capacities, I can understand the need to produce guidelines on what can and can’t be done, but 75,000 volunteers are finding themselves unable to tell people they’ve been selected for a role, what they are doing and where they are. They’re also forbidden from publishing photos and videos.

Security is a huge consideration in this year's Games, so guidelines against publishing your location or taking photos of something potentially sensitive are ones I can completely understand and support.

I can tell you I'm one of the 300,000 who applied to be an Olympic volunteer – or Games Maker, as they're being called. To be a part of London 2012 is something I would be incredibly proud of, and would want to tell my family, friends and other contacts about – especially as I will be giving up two weeks of annual leave from my full-time job to take part for free. Unfortunately, I can't tell you any more than this for fear of breaching these stringent policies.

Do you think this is taking things too far? Is censoring the Olympics' 75,000 biggest fans (they must be, to be willing to give up their time, travel across the country, and do all this without being paid) a wise move?

© 2011 James Coleman