
In the Dimension Data Cloud, you can enable file and application backups for your Linux and Windows Virtual Machines.

For this example I will show you how to use some of the commands to get a list of servers with backup enabled, find the machines that have a File System agent (Windows or Linux), check whether they have a running job, and start a new job if not.

The scenario for this example is that you want to run an update on your servers, like a kernel patch or a Windows update. Before you do so, you want to run a backup of all your machines.

Load the module and connect to the API
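A minimal sketch, assuming the module is installed under the name CaaS; the connection command is the same one used in the AWS export post on this blog (adjust -Region to suit):

# Load the CaaS PowerShell module and authenticate against the API
Import-Module CaaS
New-CaasConnection -Region Australia_AU -Vendor DimensionData -ApiCredentials (Get-Credential)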

 

Now, you want to get a list of all the servers that have backup. Get the servers in your account and filter to those that have something in the backup field.
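A sketch, assuming a Get-CaasServers cmdlet (hypothetical name) whose output carries the backup field:

# Keep only the servers that have a backup service attached
$backupServers = Get-CaasServers | Where-Object { $_.backup }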

 

For each of those servers, get a list of backup clients where the agent type (e.g. File System, SQL Server, MySQL) is the File System Agent. This is either FA.LINUX or FA.WIN.
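For a single $server from that list, something like the following (Get-CaasBackupClients is a hypothetical name):

# Keep only the File System agents (Linux or Windows)
$fileClients = Get-CaasBackupClients -Server $server |
    Where-Object { $_.type -eq "FA.LINUX" -or $_.type -eq "FA.WIN" }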

    
 

For each client (because you can have more than one file system client on a server), check if there is a running job by looking at the runningJob property of the client. If there is a job running, issue a warning (you could use this as an opportunity to cancel the job using Remove-CaaSBackupJobAgent -Server $server -BackupClient $fileClient, but for my example I won't). Finally, issue a new request to start a backup using the New-CaasBackupJob command.
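A sketch, assuming New-CaasBackupJob takes the same -Server/-BackupClient parameters as Remove-CaaSBackupJobAgent:

foreach ($fileClient in $fileClients)
{
    if ($fileClient.runningJob)
    {
        # A job is already in flight - warn and leave it alone
        Write-Warning "A backup job is already running on $($server.name)"
    }
    else
    {
        # Request a new backup job for this client
        New-CaasBackupJob -Server $server -BackupClient $fileClient
    }
}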

    

So in summary, the script looks like:
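Using the same assumed cmdlet and property names as above:

# Connect to the API
Import-Module CaaS
New-CaasConnection -Region Australia_AU -Vendor DimensionData -ApiCredentials (Get-Credential)

# Find every server with backup enabled
$backupServers = Get-CaasServers | Where-Object { $_.backup }

foreach ($server in $backupServers)
{
    # File System agents only (Linux or Windows)
    $fileClients = Get-CaasBackupClients -Server $server |
        Where-Object { $_.type -eq "FA.LINUX" -or $_.type -eq "FA.WIN" }

    foreach ($fileClient in $fileClients)
    {
        if ($fileClient.runningJob)
        {
            Write-Warning "A backup job is already running on $($server.name)"
        }
        else
        {
            New-CaasBackupJob -Server $server -BackupClient $fileClient
        }
    }
}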

In the last post I showed how to use the PowerShell module for CaaS to perform some basic operations. 
A commonly touted benefit of cloud is that "you can save money by turning off your servers when you're not using them".
So, here is a quick recipe to demonstrate how that can be done.
I'm assuming you only want to do this on your Development environments and not on production. So firstly, pick the network where your lab/testing servers are stored. I have a network called "Ant Dev" with my test servers inside; I don't need them to be running over the weekend.
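Get-CaasNetworks (also used in the AWS export post on this blog) can look it up by name; add -Location if you have networks in several datacenters:

# Find the development network by its name
$network = Get-CaasNetworks -Name "Ant Dev"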

Now, get all the servers in that network by searching for servers in that network Id.
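A sketch, assuming a Get-CaasServers cmdlet (hypothetical name) whose output exposes the network id, and using Set-CaasServerState as described below:

# All servers deployed in the chosen network
$servers = Get-CaasServers | Where-Object { $_.networkId -eq $network.id }

foreach ($server in $servers)
{
    # Clean shutdown via VMware Tools (see the -Action options below)
    Set-CaasServerState -Server $server -Action Shutdown
}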
 


Now, wrap this script up into a .ps1 file and store it somewhere you can call it from Scheduled Tasks. I've used c:\Users\Anthony\Documents\caas-sleep.ps1 for this example.
Copy the code below into ISE and set the user and file location to meet your requirements.
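One way to register the task, sketched here with schtasks (substitute your own user and script path):

# Run caas-sleep.ps1 at 6pm every Friday
schtasks.exe /Create /TN "CaaS-Sleep" /SC WEEKLY /D FRI /ST 18:00 `
    /TR "powershell.exe -File c:\Users\Anthony\Documents\caas-sleep.ps1"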

This will create a Scheduled Task in Windows to run the job at 6pm every Friday. Customise the script above to create the 'wake-up' version for late Sunday night. Set-CaasServerState takes -Action [PowerOff | PowerOn | Restart | Shutdown]. PowerOff and PowerOn are the hard options, while Restart and Shutdown use VMware Tools to initiate a clean reboot or shutdown.

In addition to the .NET client library, a Ruby alternative has also been developed. It can be used to automate various tasks, such as setting up and tearing down different environments on the cloud platform.
The SDK is available as a ruby gem. The code itself can be found on github. In this post, we’ll set up an Ubuntu server and connect it to the internet so we can reach it via ssh.
Before we can start, we need to have both ruby and the SDK on our system. If you still need to install ruby, you can find instructions here. Installing the SDK itself is easy via gem:
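For example (the gem name below is a placeholder; check the github repository for the published name):

gem install dimensiondata-cloud   # placeholder gem name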
Now that the SDK is installed, we can start building our environment. You'll see that it's quite straightforward; we can even do it right in the console. Let's fire up irb (the interactive ruby shell):
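Roughly like this; the require name is a placeholder, and the url shown is the European endpoint:

# In irb - load the SDK and define our credentials
require 'dimensiondata'   # placeholder require name

username = 'myusername'
password = 'mypassword'
url      = 'https://api-eu.dimensiondata.com'   # European portal endpoint
org_id   = 'your-org-id'                        # from your account page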
We first load the SDK in the session and define a couple of variables which hold our login information. I'm logging in via the European portal, so my url starts with "api-eu". Different possibilities are:
  • North America: api-na
  • Australia: api-au
  • Africa: api-mea
  • Asia Pacific: api-ap
The org_id is a unique string which specifies your organization. You’ll find it at the top of your account page in the cloud web portal.
With this information, we can create a client to communicate with the cloud API:
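An illustrative constructor; the real class name is in the gem's documentation:

# Create the API client with our credentials
client = DimensionData::Client.new(url, org_id, username, password)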
The client itself is not yet connected to the API, but will use the credentials you provided to execute requests on your behalf.
To create a new machine and connect it to the internet, we need to complete 3 steps:
1) Create a network
2) Create and start the cloud server
3) Connect it to the internet
We’ll go right ahead and create a network:
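Sketched with illustrative method names (show_by_name is the status call described below):

# Create the network, then check its status
client.networks.create('test-net', 'Network for our Ubuntu server')
network = client.networks.show_by_name('test-net')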
When creating a network, we need to specify a name and optionally a description. The network is created before the function returns, so the request can take some time. You can check the status of the network by calling the "show_by_name" method.
We'll now create the cloud server and place it in the network. We again need to provide a name and a description, as well as a network id and an image id. The network id can be extracted from the "show_by_name" call. The image id specifies which operating system is preinstalled on your new cloud server. Dimension Data provides a rather large list of operating systems which you can find in the cloud portal. In this case, we're interested in an Ubuntu server. The id for this image is: d4ed6d40-e2f0-11e2-84e5-180373fb68df
Putting this all together:
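An illustrative call, using the Ubuntu image id above:

# Create and start the cloud server in our new network
client.servers.create('ubuntu01', 'Test Ubuntu server',
                      network.id,
                      'd4ed6d40-e2f0-11e2-84e5-180373fb68df')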
Server creation is not instantaneous; we can follow its progress as follows:
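For example (attribute names as described below):

server = client.servers.show_by_name('ubuntu01')
server.is_deployed && server.is_started   # true once fully provisioned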
The server is correctly deployed and started when both the "is_deployed" and "is_started" variables are true.
Finally, we need to connect the network to the internet and open port 22 to allow ssh access. For this, we'll have to create an aclrule (to open port 22) and a natrule (to map our internal address to an external ip). Let's go:
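The natrule first (an illustrative signature matching the arguments described below):

# Map the server's internal ip to a public one
nat = client.natrules.create(network.id, 'ssh-nat', server.private_ip)
nat.nat_ip   # the public ip - note it down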
We have to provide the network id, give the rule a name, and provide the internal ip of our server. In this case, we extracted the ip from the server "show_by_name" method. Notice that the API returns the public ip in the "nat_ip" variable. Make a note of this ip; we'll need it to access our server later.
Creating the aclrule is a bit more involved:
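An illustrative signature matching the arguments explained below:

# network id, name, position, direction, protocol, port, allow
client.aclrules.create(network.id, 'allow-ssh', 105, 'INBOUND', 'tcp', 22, true)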
We again provide the network id and a name. We also have to provide a position (105, which in this case will be the last rule before "drop any any"). The next argument specifies that the rule is inbound. We also specify the protocol (tcp) and the port (22). Finally, we specify that the rule is an "Allow" rule (true).
Our cloud server is now available on the public ip we noted before. Let's try to access it via ssh. By default, the password is "verysecurepassword", which is actually not that secure. It is recommended to change this as soon as possible.
  Graph this data and manage this system at https://landscape.canonical.com/
root@10-240-215-11:~#
There you have it, you’ve installed your first cloud server via the API!

For this post I will be demonstrating how to leverage the AWS API to take images that you have added to EC2 and export them as OVA files.

Once you have exported these OVA files, you can then upload them to the CaaS Portal via some new commands in the PowerShell module version 1.3.1.

Firstly, you will of course need an EC2 account and an instance to test on.

You will also need to install the AWS PowerShell module: http://aws.amazon.com/powershell/

We have a new example script demonstrating this process: Export Images from AWS to CaaS.ps1

# Requirements - install PowerShell for AWS 
# http://aws.amazon.com/powershell/ 

## Replace these with your target image details.. 

$region = "ap-southeast-2" 
$transferBucket = "export-examples-dd" # Note you need to add 
$instanceToMove = "i-fgj7kblz" # ID of the Amazon instance to migrate.
$targetNetworkName = "EXAMPLE-WEB" # Name of the network you want to migrate to 
$targetLocation = "AU1" # CaaS location to migrate to. 
$targetImageName = "Example-WEB" # Name of the virtual machine you want to create 

# Import the AWS PowerShell Commands. 
Import-Module AWSPowerShell 


The next step is to import a set of utility commands (shortcuts for EC2 exports) from a file called AWS_Utilities.ps1.
# Import the AWS Utilities for CaaS
. .\AWS_Utilities.ps1 

In Amazon, you can create API keys; you will need to update those in the script with your own.

$akid = "AKIAIFY64H7FAB2L712D" # Replace with your API Key ID 
$aksec = "wLmkFh3Ow177Moy1asdu2kcoyc3jCWSs7wWA" # Key Secret from the UI 
Once you have established the keys, you need to choose your instance ID.

Now create the API connection to AWS


# Setup AWS API
Set-DefaultAWSRegion -Region $region
Set-AWSCredentials -AccessKey $akid -SecretKey $aksec -StoreAs "export"
 
 
Then you can export your image to S3

 
$exportJob = ExportImageFromAWS -region $region -bucketName $transferBucket -instanceId $instanceToMove
DownloadAWSExport -region $region -bucketName $transferBucket -exportId $exportJob
 
 
For my 8GB image, this took about 2 hours to complete. You will see the OVA file in your local folder, with the disk and configuration inside.
Once you have the OVA you can upload and import it into CaaS using the New-CaaSUploadCustomerImage command.
Normally you would have to use an FTP client, but we have built one into the command, so it uploads the files for you.

New-CaaSConnection -Region Australia_AU -Vendor DimensionData -ApiCredentials (Get-Credential)
# Upload our virtual appliance.
New-CaasUploadCustomerImage -VirtualAppliance "$exportJob.ova"
 
 

 
Then finally, import that image into a new template (Customer Image)

 
# Import OVF into CaaS.
$package = Get-CaasOvfPackages | Where { $_.name -eq "$exportJob.mf" }
$network = Get-CaasNetworks -Name $targetNetworkName -Location $targetLocation
 
 
If you log in to the Portal you will see the image, but you can of course do this via PowerShell.
 
New-CaasImportCustomerImage -CustomerImageName $targetImageName -OvfPackage $package -Network $network -Description "Imported image from AWS - $instanceToMove"
 
 
A video demonstration of this process is available here:

Apache has a project called 'libcloud', which is a multi-cloud library for Python. Dimension Data Cloud (formerly known as OpSource Cloud) has been a supported platform for many years.

Here is how you download and use libcloud with our API.

Install libcloud using PIP
>pip install apache-libcloud

Open up the Python shell
>python

Import the libcloud modules
>>>from pprint import pprint
>>>from libcloud.compute.types import Provider
>>>from libcloud.compute.providers import get_driver

Set the driver as OpSource (Dimension Data Cloud)
>>>cls = get_driver(Provider.OPSOURCE)

Turn off SSL validation for testing, or read https://libcloud.readthedocs.org/en/latest/other/ssl-certificate-validation.html
>>>import libcloud.security
>>>libcloud.security.VERIFY_SSL_CERT = False

Set your credentials
>>>driver = cls('myusername','mypassword')

List the servers in your account.
>>> pprint(driver.list_nodes())

 [<Node: uuid=8fa6750b7829b224c6b1f252decfc80a61fae424, name=ExchN1, state=TERMINATED, public_ips=[], private_ips=10.172.132.12, provider=Opsource ...>,
 <Node: uuid=db9505bf21d0811fc99d70c9a9ac72e0695a573d, name=ExchN2, state=TERMINATED, public_ips=[], private_ips=10.172.132.14, provider=Opsource ...>,

Note: LibCloud only supports the NA region at present.

Further details are available on the libcloud website:
https://libcloud.readthedocs.org/en/latest/compute/index.html

To support the launch of the NA9 MCP 2.0 functionality, we have extended the PowerShell module with MCP 2.0 commands:

  • New-CaasNetworkDomain
  • New-CaasVlan
  • Get-CaasVlan
  • Get-CaasNetworkDomain
  • New-CaasServerOnNetworkDomain

Provisioning a Network Domain.
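For example (parameter names are assumed; NA9 is the MCP 2.0 location mentioned above):

# Create a network domain in the NA9 datacenter
$domain = New-CaasNetworkDomain -Name "ExampleDomain" -Description "MCP 2.0 test domain" -Location "NA9"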

 


Get the network domains in your MCP 2.0 datacenters.
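For example:

# List the network domains you have provisioned
Get-CaasNetworkDomain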


Create a VLAN in your MCP 2.0 datacenters.
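For example (parameter names are assumed):

# Create a VLAN inside the network domain
New-CaasVlan -Name "Example-VLAN" -NetworkDomain $domain -PrivateIpv4BaseAddress "10.0.3.0"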

 


There is a new command to see the VLANs provisioned in your MCP 2.0 datacenters.
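For example (the parameter name is assumed):

# List the VLANs in a network domain
Get-CaasVlan -NetworkDomain $domain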



And of course, your existing functionality will still work.