Save Money by Scheduling Servers

As companies expand, the variable costs of business growth need to be managed carefully. Finding ways to control these costs can be difficult, as it’s not always easy to reduce spending without making compromises elsewhere in the organization. If you’re operating in the cloud, however, controlling AWS EC2 costs can be much more manageable. One way to do so is by scheduling your servers so that they are only switched ‘On’ at the times when you really need them.

Why schedule servers?

Many companies face larger-than-expected AWS EC2 bills because they aren’t always aware of how much it costs to keep all of their servers running. So when trying to reduce these bills, the first step is to figure out which servers need to be ‘On’ around the clock, and which only need to be switched ‘On’ at certain times.

Scheduling servers is a great way to save on EC2 costs in cloud computing. By scheduling servers, you can dictate when your servers will be switched on and off, enabling you to gain greater control over how much you’re paying for your cloud computing services.

For example, many 9-5 companies only need servers to be switched on between 9am and 5pm on weekdays. Scheduling servers to run only during the hours of the working week can therefore deliver significant cost savings.
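To make this concrete, here’s a minimal sketch of how such a schedule could be automated with boto3, assuming your office-hours instances carry a hypothetical Schedule=office-hours tag and that the script runs regularly (from cron or a Lambda function, say):

```python
# Hypothetical scheduler sketch: stops/starts instances tagged Schedule=office-hours.
# Assumes AWS credentials are configured and boto3 is installed.
from datetime import datetime

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
TAG_FILTER = [{"Name": "tag:Schedule", "Values": ["office-hours"]}]  # assumed tag


def tagged_instance_ids(states):
    """Return IDs of tagged instances currently in one of the given states."""
    resp = ec2.describe_instances(
        Filters=TAG_FILTER + [{"Name": "instance-state-name", "Values": states}]
    )
    return [i["InstanceId"] for r in resp["Reservations"] for i in r["Instances"]]


def apply_schedule(now=None):
    now = now or datetime.now()
    in_office_hours = now.weekday() < 5 and 9 <= now.hour < 17  # Mon-Fri, 9am-5pm
    if in_office_hours:
        ids = tagged_instance_ids(["stopped"])
        if ids:
            ec2.start_instances(InstanceIds=ids)
    else:
        ids = tagged_instance_ids(["running"])
        if ids:
            ec2.stop_instances(InstanceIds=ids)


if __name__ == "__main__":
    apply_schedule()
```

In practice you would also want to handle time zones and pagination explicitly, but the core idea really is this small.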

Cost savings when scheduling servers

Let’s take the example of running ten t2.large Linux instances on-demand in the US East (N. Virginia) region, at a combined cost of $1.04 per hour ($0.104 per instance), over the course of one week.

The cost of running these instances 24/7 over the course of a week would be $174.72. If they were left running over the course of a month (28 days), the cost rises significantly to $698.88.

Now let’s change the circumstances in which your servers operate so that they are active only during the working hours of a five-day week (9-5, totaling 40 hours). The cost of running these instances now drops to $41.60 per week. Over the course of a month (28 days), the cost amounts to $166.40.
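The arithmetic behind these figures is simple enough to check yourself; here is the calculation as a few lines of Python, using the same assumed $1.04 combined hourly rate:

```python
# Worked example of the scheduling savings described above.
hourly_rate = 1.04          # ten t2.large Linux instances, on-demand, combined $/hour

always_on_week = hourly_rate * 24 * 7          # $174.72
always_on_month = hourly_rate * 24 * 28        # $698.88 over a 28-day month

office_hours_week = hourly_rate * 8 * 5        # 9-5, Mon-Fri = 40 hours -> $41.60
office_hours_month = office_hours_week * 4     # $166.40 over four weeks

monthly_saving = always_on_month - office_hours_month   # $532.48
print(f"Monthly saving: ${monthly_saving:.2f}, yearly: ${monthly_saving * 12:.2f}")
```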

By scheduling your servers so that these instances only operate during the times in which they are needed, you already lower your costs by $532.48 a month. That’s a saving of over $6000 per year. Not bad, right?

Scheduled Reserved Instances

AWS also offers its own way to make use of scheduled instances. With AWS, you can purchase reserved instances on a schedule, securing all the computing capacity you need ahead of time.

Pricing for these instances fluctuates with supply and demand, as well as with the time at which you want to schedule them. Yet even if prices have increased since you purchased these scheduled instances, you pay the price at which you made the purchase and nothing more.

Scheduled reserved instances also come at a discount to the on-demand price: scheduling instances during peak hours (Monday-Saturday) gives you a 5% discount, while scheduling instances during off-peak hours (Saturday-Monday) gives you a 10% discount.

Ultimately, scheduling servers is a cost-effective way to run your cloud computing operations, enabling you to gain greater control over servers and save a whole heap of money on your AWS EC2 bills.

 

Improve Amazon EC2 Security

If you’re considering a cloud computing infrastructure for your business, or perhaps if AWS is something you’re implementing already, then you’ll understand why Amazon EC2 security is a top concern for many organisations utilising the platform.

AWS EC2 Security Concerns

For most AWS organisations (especially those dealing with sensitive information), losing private customer data to a DDoS attack or through an oversight in access management is not only a disaster in terms of the downtime needed to resolve the problem, but also hugely damaging to customer trust and brand credibility.

It’s also no exaggeration that if an attack is significant enough to corrupt a business’s central data, then it’s possible that the company may close down as a result – as was unfortunately the case with Code Spaces.

In fact when the Cloud Machine Manager team recently attended Amazon’s ‘AWSome Roadshow Day’ in London and keynote speaker Tom Woodyer (Technical Instructor) asked the audience what their main concern with AWS was, the answer was (you guessed it) security!

4 Amazon EC2 Security Considerations

It’s very easy when a data breach occurs for users to hastily point the finger at AWS EC2 security and declare it Amazon’s fault, but in actuality this isn’t always the case. Amazon has gone to great lengths to counter this perception by providing users with a significant amount of relevant resources to help them keep themselves protected.

However, because AWS is so multi-layered and quite complex, finding this information can become a challenge – that’s where our 4 considerations for Amazon EC2 security can help:

1) The AWS Shared Responsibility Model

When it comes to AWS security, all roads start from the Shared Responsibility Model, which is effectively Amazon drawing a line in the sand and making it clear that while they oversee the “security OF the cloud”, it’s the customer’s responsibility to handle their data and its “security IN the cloud”.

What does this mean? Well, it means that while Amazon operates, manages, and controls the host operating system and virtualization layer, right down to the physical security of the premises that contain the servers, the customer remains responsible for the guest operating system, security updates, configuration of the EC2 security group firewall (more on that later), and managing any controls for associated software (there’s a nice video summary of all this here). Essentially, the model does a good job of clarifying the AWS landscape in terms of security, and of highlighting where your attention is most needed.

2) EC2 Identity and Access Management

The Identity and Access Management (IAM) service from AWS is a permissions-based tool that allows network administrators to manage AWS users and the resources they can access – without having to share a password or key with them.

When used for AWS EC2 security, the IAM service can similarly attach user-based permissions to an ‘IAM role’, which is then launched alongside EC2 instances so that applications can securely access AWS service APIs. This is ideal for controlling which AWS users within your business can perform specific API actions, thereby limiting the potential damage of someone doing something they’re not supposed to.
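As a rough illustration of that idea, the boto3 sketch below creates a role that EC2 instances can assume and attaches an inline policy limited to a handful of EC2 actions. The role name, policy name and allowed actions are made up for the example:

```python
# Sketch: create an IAM role for EC2 with a narrowly scoped inline policy.
# Role/policy names and the allowed actions are illustrative assumptions.
import json

import boto3

iam = boto3.client("iam")

trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "ec2.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

limited_ec2_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["ec2:DescribeInstances", "ec2:StartInstances", "ec2:StopInstances"],
        "Resource": "*",
    }],
}

iam.create_role(RoleName="app-ec2-role", AssumeRolePolicyDocument=json.dumps(trust_policy))
iam.put_role_policy(
    RoleName="app-ec2-role",
    PolicyName="limited-ec2-actions",
    PolicyDocument=json.dumps(limited_ec2_policy),
)

# Instances pick the role up via an instance profile.
iam.create_instance_profile(InstanceProfileName="app-ec2-profile")
iam.add_role_to_instance_profile(
    InstanceProfileName="app-ec2-profile", RoleName="app-ec2-role"
)
```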

3) EC2 Security Groups

EC2 security groups are essentially traditional firewalls implemented within the AWS virtual environment: they allow or block traffic according to the access rules assigned to their ports and protocols, thereby reducing the threat of a hacker breach.

However, by taking the time to set up your security groups properly and configure them around your business’s security policy (rather than just for a single instance), a single set of rules can then be applied across multiple EC2 instances. This helps to improve your AWS defences on a larger scale whilst also promoting good security administration.
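Here’s a minimal boto3 sketch of that approach – one reusable security group with tightly scoped ingress rules that can then be attached to many instances. The group name, VPC ID and allowed CIDR ranges are placeholders:

```python
# Sketch: one reusable security group with tightly scoped ingress rules,
# which can then be attached to many instances. Values are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

sg = ec2.create_security_group(
    GroupName="web-tier-sg",            # hypothetical name
    Description="HTTPS in, SSH only from the office",
    VpcId="vpc-0123456789abcdef0",      # placeholder VPC ID
)

ec2.authorize_security_group_ingress(
    GroupId=sg["GroupId"],
    IpPermissions=[
        {   # HTTPS open to the world
            "IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
            "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
        },
        {   # SSH restricted to an example office range
            "IpProtocol": "tcp", "FromPort": 22, "ToPort": 22,
            "IpRanges": [{"CidrIp": "203.0.113.0/24"}],
        },
    ],
)
```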

4) EC2 Encryption

Encryption has become a cornerstone practice for large-scale data security, as it makes life difficult for uninvited viewers (hackers) trying to read stored data. But unlike an in-house infrastructure, where data is encrypted on servers under your own roof, with cloud servers the process is a little different.

Encryption for EC2 volumes is available as a feature of Elastic Block Store (EBS), a popular storage system for flexible virtual data that many companies use to hold sensitive data such as databases and images. Here, encryption is applied on the servers that host the EC2 instances, covering data at rest and data moving between your instances and the attached EBS storage.
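As a small sketch of how this looks in practice, enabling encryption when creating an EBS volume (or in the block device mapping of a new instance) is a single flag in boto3. The AMI ID, availability zone and KMS key alias below are placeholders:

```python
# Sketch: create an encrypted EBS volume and launch an instance whose root
# volume is encrypted. AMI ID, AZ, and key alias are placeholder values.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Standalone encrypted data volume (uses the default EBS key unless KmsKeyId is given).
ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=100,                       # GiB
    VolumeType="gp2",
    Encrypted=True,
    KmsKeyId="alias/my-ebs-key",    # hypothetical customer-managed key
)

# Instance with an encrypted root volume via the block device mapping.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI
    InstanceType="t2.large",
    MinCount=1,
    MaxCount=1,
    BlockDeviceMappings=[{
        "DeviceName": "/dev/xvda",
        "Ebs": {"VolumeSize": 30, "Encrypted": True},
    }],
)
```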

It’s also worth mentioning that if you’re an S3 user, Amazon also provide additional services for server-side and client-side encryption which you may find useful.

5) AWS Regions – One More for Good Measure!

It’s also worth acknowledging that while AWS datacentres are pretty advanced technology-wise, they’re still subject to the same vulnerabilities that affect other businesses, such as power outages and software problems. Spreading your EC2 instances across different AWS regions therefore helps you avoid a complete system outage if something goes wrong in one of them.
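As a quick illustrative check on how your footprint is spread, a few lines of boto3 can count running instances per region (the region list here is just a sample, and pagination is skipped for brevity):

```python
# Sketch: count running instances in a handful of regions to see how
# evenly your footprint is spread. The region list is a sample, not exhaustive.
import boto3

REGIONS = ["us-east-1", "us-west-2", "eu-west-1"]  # sample regions

for region in REGIONS:
    ec2 = boto3.client("ec2", region_name=region)
    resp = ec2.describe_instances(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    )
    count = sum(len(r["Instances"]) for r in resp["Reservations"])
    print(f"{region}: {count} running instance(s)")
```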

So there you have it; five considerations to help improve your Amazon EC2 security. But ultimately, to maintain a secure AWS cloud as your EC2 usage starts to grow, you should restrict AWS resources and closely govern EC2 permissions to form solid security practices.

Utilisation Monitoring Increases Cloud Efficiency

Companies often move to the cloud to benefit from cloud computing’s ability to reduce costs and improve efficiency. But when cloud computing is applied across a huge IT infrastructure, managers need a way to keep an eye on everything that’s going on.

One of the main advantages of cloud computing is its flexibility and ability to grow with your company. And as you grow, you need to make important decisions regarding where additional capacity is needed.
But to do this, you need to have an idea of how cloud resources are being deployed across your IT infrastructure, so you have greater visibility into which applications need more resources. That’s why it’s important for organizations to implement the right monitoring tools.

For example, monitoring CPU usage for particular tasks enables an organization to see which tasks require more resources and which don’t need as much. This is particularly important when you have applications that periodically spike in usage and you need to be in a position to allocate enough resources to them.
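For instance, pulling the average CPU utilisation of a single instance out of CloudWatch takes only a few lines of boto3; the instance ID below is a placeholder:

```python
# Sketch: fetch average CPU utilisation for one instance over the last 24 hours.
from datetime import datetime, timedelta

import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # placeholder
    StartTime=datetime.utcnow() - timedelta(hours=24),
    EndTime=datetime.utcnow(),
    Period=3600,                 # one datapoint per hour
    Statistics=["Average"],
)

for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], f'{point["Average"]:.1f}%')
```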

These monitoring tools can also be used to assess application response times, page load speeds, service availability and so on, in order to understand how end users experience the service you provide. This allows you to narrow down where improvements can be made within your cloud computing infrastructure, improving both the efficiency of your cloud resources and the end-user experience.

You could even put systems in place for predictive analytics, forecasting metrics such as memory utilization so that you can make changes to computing resources in advance. The advantage here is that your cloud computing resources are ready and available for applications whose demand is about to increase.
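As a very rough sketch of that idea, you could fit a simple linear trend to recent utilisation datapoints and project it forward. Real predictive analytics would be far more sophisticated (and memory metrics need the CloudWatch agent to be installed), so treat this purely as an illustration:

```python
# Sketch: naive linear-trend forecast over a series of utilisation datapoints.
# Purely illustrative; a real system would use proper time-series forecasting.
from statistics import mean


def linear_forecast(values, steps_ahead):
    """Fit y = a + b*x by least squares and project `steps_ahead` points forward."""
    n = len(values)
    xs = range(n)
    x_bar, y_bar = mean(xs), mean(values)
    b = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, values)) / sum(
        (x - x_bar) ** 2 for x in xs
    )
    a = y_bar - b * x_bar
    return [a + b * (n - 1 + s) for s in range(1, steps_ahead + 1)]


# Example: hourly memory utilisation (%) trending upwards.
recent = [52, 55, 57, 61, 64, 66, 70, 73]
print(linear_forecast(recent, steps_ahead=3))  # projected next three hours
```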

Monitoring the utilization of cloud computing resources is also an important way to ensure that you are getting exactly what you pay for from cloud providers and that they’re adhering to the Service Level Agreement (SLA).

Using monitoring tools, you can assess performance as experienced by users and work out what has caused any issues detracting from a good user experience. This information can then be used to determine whether a problem was caused by the cloud vendor or by how the application has been designed.

What this then gives you is a better understanding of your ROI from your cloud infrastructure, and where you can make changes in order to boost your ROI.

Ultimately, utilization monitoring gives you a thorough understanding of how efficiently you are using your cloud resources. By gaining greater visibility into where cloud resources can be used more efficiently, you can use this information to improve your cloud infrastructure and ultimately provide a better experience for end users.

For many small businesses and start-ups looking to venture into cloud computing, the Amazon Web Services (AWS) platform, with its accessible low-cost pricing structure, makes it easy to test the waters of cloud computing without having to commit too much.

Reduce AWS Small Business Costs

Because Amazon services are relatively cheap, an AWS small business will often view the costs associated with server provisioning as almost minor when compared to the cost of buying the infrastructure for themselves, or against the larger overall budget of a project.

However, such thinking minimizes the significance of EC2 server costs, and it becomes all too easy for project managers to spin up instances without proper consideration of the bigger picture. Specifically, the danger here is that it can result in resource over-provisioning and user ‘sprawl’.

The problem then worsens as projects move forward and extra EC2 servers are spun up but not switched off (after testing, for example), which ultimately results in an expensive AWS bill landing on the desk of someone in finance. It is only at this point that realisation dawns of just how quickly EC2 costs can stack up!

AWS Small Business Frustrations

The causes of an unexpectedly large EC2 server bill usually develop within the early stages of AWS usage, when it’s not uncommon for small companies to skip managing their AWS account properly to get to the heart of what they really want to do – developing and testing.

In fact, smaller AWS companies should look to lower their costs from the very start by flexibly matching their provisioning against their actual usage, and by examining the areas that actually affect their AWS EC2 costs, such as:

  1. Server platforms – what operating system does your server need?
  2. Server regions and availability zones – where is it needed geographically?
  3. Server types – what is your server going to do?
  4. Server size – how large or small does your server need to be?
  5. Server tagging – have you tagged your servers for accurate cost allocation? (See the sketch after this list.)
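On that last point, tagging is cheap to do and pays off when the bill arrives. A minimal boto3 sketch (the instance IDs and tag values are examples only) looks like this:

```python
# Sketch: tag instances so costs can later be broken down by project/environment.
# The instance IDs and tag values are examples only.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

ec2.create_tags(
    Resources=["i-0123456789abcdef0", "i-0fedcba9876543210"],  # placeholder IDs
    Tags=[
        {"Key": "Project", "Value": "website-rebuild"},
        {"Key": "Environment", "Value": "test"},
        {"Key": "Owner", "Value": "dev-team"},
    ],
)
```

Once tags like these are activated as cost allocation tags in the billing console, the monthly bill can be broken down along the same lines.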

However, to newly initiated AWS users such planning isn’t always obvious, which is why Amazon looks to resolve these issues with the library of features available from the AWS Management Console. These help to control usage and review costs – CloudWatch, Auto Scaling, and the Trusted Advisor dashboard, for example – all of which support Amazon’s mantra for optimizing cloud usage: provision correctly, provision elastically, and save more!

Which is great if you’re a large organisation with the resources and DevOps expertise on-hand to utilise these features as needed. But for an AWS start-up with limited resources and not many developers, the solutions available aren’t practical or quick, and often divert your human resources away from the real projects, which in turn reduces your bottom line productivity.

Essentially, to reduce costs you should provision correctly and turn your servers off when not in use. As such, a common workaround for small AWS companies is to write or deploy their own script for EC2 scheduling or automation – one that turns servers off when people go home for the day, for example. That’s useful under the right circumstances, but how often do things go to plan? Chances are a schedule will clash with someone who is still working, and you’ll need to pull someone away to look at it again.
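One way to soften that problem, sketched below, is to give people an escape hatch: the end-of-day shutdown skips any instance carrying a hypothetical KeepAlive=true tag, so someone working late only has to set a tag rather than chase down whoever owns the schedule:

```python
# Sketch: end-of-day shutdown that honours an opt-out tag (KeepAlive=true).
# Tag names are assumptions for this example.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

resp = ec2.describe_instances(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
)

to_stop = []
for reservation in resp["Reservations"]:
    for instance in reservation["Instances"]:
        tags = {t["Key"]: t["Value"] for t in instance.get("Tags", [])}
        if tags.get("KeepAlive", "").lower() == "true":
            continue  # someone is still working on this one
        to_stop.append(instance["InstanceId"])

if to_stop:
    ec2.stop_instances(InstanceIds=to_stop)
```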

AWS Small Business Needs

The needs of an AWS small business are fundamentally different to those of a larger organisation.

While many small businesses support the AWS value proposition and enjoy the same benefits as large companies, they’ll rarely engage as deeply with the platform as the big boys because of their limited resources and the fact they’ve got fewer projects that require the breadth of utilities on offer. All of which isn’t helped by the platform being notably complex, with intricate dashboards and AWS terminology used throughout its design.

As a result, AWS small businesses will often have a tough-time justifying any increased use of AWS and will look for a solution that simplifies its usage even further.

Third party tools can help in this respect, offering user-friendly controls to help manage servers, or supplement home-grown scripts, with simple-to-use EC2 features that provide on-demand control, automation and scheduling – that optimize cloud usage without you having to get your hands dirty.

Overall, Amazon offers some very good solutions for big users, but for everyone else a clear EC2 management solution can often take the technical headaches away.


AWS Costs

Amazon’s EC2 servers are notionally cheap. They are also easy to provision. So, in theory, there is little to go wrong.

You provision a few servers, you pay some money based on what you use and that’s it. It’s a cheaper, easier and quicker way of provisioning servers than the old fashioned way of buying in hardware and setting up your own servers.

So if they are so cheap and easy to set up and manage, why do so many people get big bills (not all of which were expected)?

The easy answer is: because they provisioned a lot of servers. And there is probably some truth to this. The more servers you provision, the larger your bill will become. But another reason is a misunderstanding of costs!

At this point, you should be reminded that there are multiple ways of provisioning Amazon EC2 servers. You can use reserved instances, where you effectively buy a discount by committing to a server for 1-3 years, usually with an upfront cost.

You can provision spot instances, where you bid for spare server capacity and effectively use the server until the price of the server rises above your maximum bid price.

And then there are EC2 servers on-demand, which is a common way for developers to spin up new servers. In effect, you pay for on-demand servers by the hour for the time they are switched on. (Notice you pay by the time they are switched on, not for the time they are actually used – this catches some people out).

EC2 Costs Made Easy 

Each of these three ways to provision EC2 servers becomes a little more complex when it comes to actually spinning the servers up, because there are different factors that need to be considered, and each factor can alter the cost of provisioning the servers.

Platform

The first factor to consider is the server platform. By this, we mean the operating system or platform you need the server to run. With AWS you can provision servers for:

  • Linux
  • RHEL
  • SLES
  • Windows
  • Windows with SQL Standard
  • Windows with SQL Web
  • Windows with SQL Enterprise

Each of these platforms will cost different amounts in conjunction with the rest of the server set-up.

Region 

The prices for EC2 servers vary across regions; a Linux EC2 server might cost more in US West than in US East. In some cases you might be able to be flexible about the region, but you might also have to stick to certain regions based on compliance and regulations.

For example, in Europe there are rules that say European data should be kept on servers based in Europe.

Type of Server 

Type of server is very important, as it needs to be set up correctly based on what the server will be used for. The different types of EC2 server that can be provisioned are:

  • General Purpose
  • Compute Optimised
  • GPU Instances
  • Memory Optimised
  • Storage Optimised

Each of these types of server is broken down further into different specifications and then into different sizes.

These different types of servers cost different amounts but you may be limited to what you can spin up based on the type of work being carried out and what you need the EC2 servers for.

Size of Server

Whilst the type of server might not be very flexible given the brief, the size of the server(s) might be slightly more flexible, and size is another cost factor. The sizing of different types of servers varies; for example, you could provision anything from a nano general purpose server to an 8xlarge compute optimised server. All of which will change the cost of what you are provisioning.

Bringing the Costs Together 

Based on these factors, the price of provisioning servers will vary. When it comes to provisioning the servers, you need to know exactly what you’re looking for so you can pick the best option for you or your business.

This will then determine the cost, and you can be sure you understand why your AWS bill is what it is. Where this sometimes becomes a problem is when there is miscommunication between the finance people paying for the servers and the developers who provision them.

If you are one of those finance people, for every server provisioned, find out exactly what the set up was so you can budget appropriately for it.
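If chasing developers for that information is impractical, a short boto3 report can pull the cost-relevant basics (instance type, platform, region, tags) straight from the account. This sketch covers a single region and skips pagination for brevity:

```python
# Sketch: list the cost-relevant details of each instance in one region.
# Single region, no pagination -- illustrative only.
import boto3

REGION = "us-east-1"
ec2 = boto3.client("ec2", region_name=REGION)

for reservation in ec2.describe_instances()["Reservations"]:
    for instance in reservation["Instances"]:
        tags = {t["Key"]: t["Value"] for t in instance.get("Tags", [])}
        print(
            instance["InstanceId"],
            instance["InstanceType"],
            instance.get("Platform", "linux"),   # 'Platform' is only set for Windows
            REGION,
            tags.get("Project", "untagged"),
        )
```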

You can see the full AWS EC2 pricing list here.

Spot Instances

Business owners often look for ways to reduce their organizational costs and become more efficient with the budget they have. Migrating to the cloud has become a popular strategy to reduce costs, but even within cloud computing, having the right configuration can help you to save even further on your cloud computing costs.

One of the ways Amazon Web Services (AWS) EC2 users can save on cloud computing bills is by provisioning Spot Instances. Spot Instances are one of three purchasing options AWS provides EC2 customers: On-Demand Instances, Reserved Instances and Spot Instances.

So what are Spot Instances? Spot Instances allow EC2 users to bid on unused EC2 capacity through an online marketplace. You set the maximum hourly price you are willing to pay for computing capacity, while the price of these EC2 instances fluctuates with supply and demand. But don’t worry: you never pay more than the maximum price you’ve set – if the Spot price exceeds your maximum, your instance will be shut down.
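For the curious, placing such a bid programmatically is straightforward with boto3’s classic request_spot_instances call; the AMI ID, key pair name and maximum price below are placeholders:

```python
# Sketch: request a single Spot Instance with a maximum hourly price.
# AMI ID, key pair, price, and instance type are placeholder values.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.request_spot_instances(
    SpotPrice="0.05",                 # the most you are willing to pay per hour
    InstanceCount=1,
    Type="one-time",
    LaunchSpecification={
        "ImageId": "ami-0123456789abcdef0",   # placeholder AMI
        "InstanceType": "m4.xlarge",
        "KeyName": "my-keypair",              # hypothetical key pair
    },
)

for req in response["SpotInstanceRequests"]:
    print(req["SpotInstanceRequestId"], req["State"])
```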

So when is it best to use Spot Instances? Spot Instances are best for applications that you don’t mind being interrupted – remember, if the Spot Price exceeds your maximum bid, the instance will be terminated.

Spot Instances are also great for running simulations completed as batch jobs, in financial services for running analyses such as wealth management and even in geospatial analysis for satellite image processing.

One of the greatest benefits of Spot Instances is that you can save 50-90% on your EC2 costs compared to On-Demand Instances. For example, the On-Demand price for an m4.xlarge instance is $0.239 per hour, but the Spot price for this instance is $0.0358 per hour (for Linux/UNIX Usage, US East [N. Virginia] region, as of 28th January). And because instances are terminated if the price exceeds your maximum bid, you won’t pay any more than you should.

Spot Instances also have a Spot Fleet feature: if you need quite a bit of computing capacity, you can launch a whole group of Spot Instances with a single request. The advantage here is that within your Spot Fleet request you can use a ‘lowest price’ strategy, meaning the Spot Instances you obtain come from the Spot pool with the lowest price.

But there’s a particular strategy you can use to minimize the chances of your Spot Instances being interrupted. Rather than the ‘lowest price’ strategy, a ‘diversified’ strategy ensures that the Spot Instances you provision come from a range of pools. So if you provision 200 instances as 20 from each of 10 different pools, then when the price in one pool exceeds your maximum bid, only 10% of your instances are affected. Pretty cool, huh?
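Here’s a condensed boto3 sketch of a diversified Spot Fleet request. The fleet role ARN, AMI ID, prices and instance types are placeholders; listing several launch specifications is what gives the fleet multiple pools to draw from:

```python
# Sketch: Spot Fleet request spread across several instance-type pools.
# Role ARN, AMI ID, prices, and types are placeholder values.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.request_spot_fleet(
    SpotFleetRequestConfig={
        "IamFleetRole": "arn:aws:iam::123456789012:role/spot-fleet-role",  # placeholder
        "AllocationStrategy": "diversified",
        "TargetCapacity": 200,
        "SpotPrice": "0.10",          # fleet-wide maximum price per instance-hour
        "LaunchSpecifications": [
            {"ImageId": "ami-0123456789abcdef0", "InstanceType": t}
            for t in ["m4.large", "m4.xlarge", "c4.large", "c4.xlarge", "r4.large"]
        ],
    },
)

print(response["SpotFleetRequestId"])
```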

But you’re probably asking yourself, ‘how do I know how much to bid for my Spot Instances?’ Well, the clever people at AWS have developed a Spot Bid Advisor that uses Spot bid history to help you determine a bid price that suits your needs. The Spot Bid Advisor gives you information on how likely you are to be outbid for particular instance types, allowing you to make more informed decisions on which instance types you should bid for.

Ultimately, Spot Instances are a great cost-saving tool for AWS EC2 users who need to cut costs wherever they can. With Spot Instances, you can provision instances for your computing needs without the fear of exceeding any price limits that you set.