Azure Premium Storage is a relatively new Azure feature that lets you replace your standard storage account with high-performance, lower-latency storage at an additional cost. Throughput of up to 200 MB/s can be achieved with a single disk, and even more if multiple disks are combined on a single server.
This all sounds great, but users of Premium Storage may find themselves disappointed when first testing this hardware and seeing less than the advertised performance.
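If you do want to combine multiple premium disks, one approach is to stripe them into a single volume with Storage Spaces inside the VM. The sketch below assumes the attached premium data disks are the only poolable disks; the pool and volume names are placeholders, and the column count would need tuning for your workload.

```powershell
# Pool every poolable data disk attached to the VM (assumed to be the premium data disks)
$disks = Get-PhysicalDisk -CanPool $true

New-StoragePool -FriendlyName "PremiumPool" `
    -StorageSubSystemFriendlyName "Windows Storage*" `
    -PhysicalDisks $disks

# Stripe a simple virtual disk across every disk in the pool, then partition and format it
New-VirtualDisk -StoragePoolFriendlyName "PremiumPool" -FriendlyName "PremiumStripe" `
    -ResiliencySettingName Simple -NumberOfColumns $disks.Count -UseMaximumSize |
  Get-Disk |
  Initialize-Disk -PartitionStyle GPT -PassThru |
  New-Partition -AssignDriveLetter -UseMaximumSize |
  Format-Volume -FileSystem NTFS -NewFileSystemLabel "PremiumData"
```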
Back in the days of cloud services, every VM you created got a set of default endpoints that let in traffic for RDP and Remoting on a random port, and if you wanted ingress on other ports you just created more endpoints. In the V2 world cloud services don’t exist, and endpoints are now primarily configured as inbound NAT rules on a load balancer, with the default being no NAT rules. This is ultimately a much better and more secure way of doing it; however, it did make it a pain when you needed to add one or more NAT rules for each VM in your deployment, especially when you had lots of VMs.
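As a rough illustration of what that per-VM work looks like, here’s a minimal sketch using the AzureRM PowerShell module to add an RDP NAT rule to an existing load balancer and bind it to a VM’s NIC. The resource names and port numbers are placeholders.

```powershell
# Fetch the existing load balancer and add a NAT rule mapping a front-end port to RDP on the VM
$lb = Get-AzureRmLoadBalancer -Name "myLB" -ResourceGroupName "myRG"

Add-AzureRmLoadBalancerInboundNatRuleConfig -LoadBalancer $lb -Name "RDP-VM1" `
    -FrontendIpConfiguration $lb.FrontendIpConfigurations[0] `
    -Protocol Tcp -FrontendPort 50001 -BackendPort 3389
$lb = Set-AzureRmLoadBalancer -LoadBalancer $lb

# Attach the new rule to the VM's NIC so traffic actually reaches the VM
$nic = Get-AzureRmNetworkInterface -Name "vm1-nic" -ResourceGroupName "myRG"
$nic.IpConfigurations[0].LoadBalancerInboundNatRules.Add(
    ($lb.InboundNatRules | Where-Object { $_.Name -eq "RDP-VM1" }))
Set-AzureRmNetworkInterface -NetworkInterface $nic
```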
Anyone using the Azure DSC Extension for ARM deployments with the “AutoUpgradeMinorVersion” property set to true will notice that the extension version has gone up to 2.9.1. There’s no official announcement yet on what’s in it, but testing has shown that at a minimum it includes a fix for an issue on Server 2016 TP4 where DSC deployments would fail with the error:
ConvertTo-MOFInstance : System.ArgumentException error processing property 'Password' OF TYPE 'MSFT_Credential': Certificate 'AAAAAAAAAAAAAAAAAAA' cannot be used for encryption.
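For reference, a rough PowerShell equivalent of deploying the extension with minor-version auto-upgrade enabled might look like the sketch below; the configuration script, storage account and resource names are all placeholders.

```powershell
# Publish the DSC configuration package to blob storage (placeholder names)
Publish-AzureRmVMDscConfiguration -ConfigurationPath ".\MyConfig.ps1" `
    -ResourceGroupName "myRG" -StorageAccountName "mystorageacct"

# Apply the extension; -AutoUpdate corresponds to autoUpgradeMinorVersion in an ARM template
Set-AzureRmVMDscExtension -ResourceGroupName "myRG" -VMName "myVM" `
    -ArchiveBlobName "MyConfig.ps1.zip" -ArchiveStorageAccountName "mystorageacct" `
    -ConfigurationName "MyConfig" -Version "2.9" -AutoUpdate
```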
I’ve recently started using the CentOS Linux VMs in Azure for some work that requires Linux. One thing I noticed quickly after starting to use them was that these VMs do not come with any swap space configured by default.
You can go ahead and create a swap partition using the standard methods, but when you reboot the VM it won’t persist, which was a bit infuriating. That was until I discovered that you can make this change through the Azure Linux agent that is installed by default on each VM, and it will take care of it for you.
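The settings live in /etc/waagent.conf; something along these lines enables swap on the ephemeral resource disk (the size below is just an example value):

```
# /etc/waagent.conf -- have the agent format the resource disk and create a swap file on it
ResourceDisk.Format=y
ResourceDisk.EnableSwap=y
ResourceDisk.SwapSizeMB=2048
```

After changing the file, restart the agent (e.g. service waagent restart on CentOS) or reboot the VM, and the agent will create the swap file on the resource disk for you.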
It’s often necessary to configure Azure virtual machines to use a consistent outbound IP address, for example to connect to another resource protected by an IP-based whitelist. In a classic cloud-service-based deployment this was easy: all of the VMs in the cloud service used the cloud service’s IP for outbound traffic and all was well.
In an Azure Resource Manager (ARM) deployment things are different. There’s no concept of cloud services, and public IPs are assigned to VMs or load balancers.
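One way to get a consistent outbound address in ARM is to put the VMs behind a load balancer that owns a single public IP, since outbound traffic from backend-pool members is translated to that IP. The sketch below uses the AzureRM module with placeholder names; depending on your setup you may also need a load-balancing rule on the load balancer.

```powershell
# A load balancer with a single static public IP (placeholder names and region)
$pip = New-AzureRmPublicIpAddress -Name "outbound-pip" -ResourceGroupName "myRG" `
    -Location "West Europe" -AllocationMethod Static

$frontend = New-AzureRmLoadBalancerFrontendIpConfig -Name "frontend" -PublicIpAddress $pip
$pool     = New-AzureRmLoadBalancerBackendAddressPoolConfig -Name "backend"

$lb = New-AzureRmLoadBalancer -Name "outbound-lb" -ResourceGroupName "myRG" `
    -Location "West Europe" -FrontendIpConfiguration $frontend -BackendAddressPool $pool

# Add each VM's NIC to the backend pool so its outbound traffic uses the LB's public IP
$nic = Get-AzureRmNetworkInterface -Name "vm1-nic" -ResourceGroupName "myRG"
$nic.IpConfigurations[0].LoadBalancerBackendAddressPools.Add($lb.BackendAddressPools[0])
Set-AzureRmNetworkInterface -NetworkInterface $nic
```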
On Tuesday at Ignite Microsoft announced a number of new Azure services, one of which was Azure DNS. The ability to host your DNS zones has been a feature of cloud providers like AWS for some time now, so it’s good to see Microsoft adding in this essential component for providing web services.
Before venturing into using Azure DNS there are a few things to be aware of. Firstly, it’s not (yet) available in the management portal; you can use it via PowerShell.
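As a quick illustration of the PowerShell experience, a sketch along these lines creates a zone and adds an A record to it. The zone name, record value and resource group are placeholders, and the exact cmdlet names have shifted a little as the Azure PowerShell modules have evolved.

```powershell
# Create the DNS zone (placeholder zone and resource group)
New-AzureRmDnsZone -Name "example.com" -ResourceGroupName "myRG"

# Create an A record set, add a record to it, then commit the change
$rs = New-AzureRmDnsRecordSet -Name "www" -RecordType A -ZoneName "example.com" `
    -ResourceGroupName "myRG" -Ttl 3600
Add-AzureRmDnsRecordConfig -RecordSet $rs -Ipv4Address "203.0.113.10"
Set-AzureRmDnsRecordSet -RecordSet $rs
```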
Microsoft recently announced a preview of SQL Azure Elastic Database Pools. This new service allows you to create a pool of databases that share the same underlying resources. This is likely to be especially useful for SaaS providers, where clients are often separated by database but don’t require all of the throughput and storage provided by a single Azure database. Using elastic database pools you can create a pool with a specified amount of resources and place your databases into it.
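As a rough sketch of how that looks from PowerShell (AzureRM.Sql module; the names and sizes are placeholders), you create the pool on an existing server and then move a database into it:

```powershell
# Create a pool on an existing SQL server with example per-database limits
New-AzureRmSqlElasticPool -ResourceGroupName "myRG" -ServerName "myserver" `
    -ElasticPoolName "tenant-pool" -Edition Standard `
    -Dtu 100 -DatabaseDtuMin 0 -DatabaseDtuMax 20

# Move an existing database into the pool
Set-AzureRmSqlDatabase -ResourceGroupName "myRG" -ServerName "myserver" `
    -DatabaseName "tenant-db-001" -ElasticPoolName "tenant-pool"
```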
Picking the right Azure data centre to host your application in is critical to getting decent levels of latency. In some cases it’s very clear which data centre to use based on your audience’s location, but what about when things aren’t so clear? If you’re in Europe, which of the two locations is best? If you’re in the Middle East, where there are no Azure data centres, how do you find out which is the best to use?
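One crude way to compare candidate regions is to time a TCP connection to the blob endpoint of a storage account in each region you want to test; the hostnames below are placeholders for accounts you would create yourself.

```powershell
# Placeholder blob endpoints for storage accounts created in each candidate region
$endpoints = @{
    "North Europe" = "mytestne.blob.core.windows.net"
    "West Europe"  = "mytestwe.blob.core.windows.net"
}

foreach ($region in $endpoints.Keys) {
    # Time a raw TCP connect to port 443 as a rough latency indicator
    $sw = [System.Diagnostics.Stopwatch]::StartNew()
    $client = New-Object System.Net.Sockets.TcpClient
    $client.Connect($endpoints[$region], 443)
    $sw.Stop()
    $client.Close()
    "{0}: {1:N0} ms" -f $region, $sw.Elapsed.TotalMilliseconds
}
```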