AZ Cheat Sheet
Login and Subscription
- Login
az login --use-device-code
- Show subscription
az account show
- List subscriptions
az account subscription list
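- Alternative sketch: list the subscriptions of the logged-in account with the core CLI (no extension needed)
az account list --output table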
- Set Subscription
az account set --subscription SubscriptionName
- Login with an SPN (details below)
az login --service-principal --username "..............................." --password '...............................' --tenant "..............................."
Start/Stop/Deallocate
- Start
az vm start -g MyResourceGroup -n MyVm
- Start all VMs in a Resource Group
az vm start --ids $(az vm list -g MyResourceGroup --query "[].id" -o tsv)
- Stop
az vm stop -g MyResourceGroup -n MyVm
- Stop all VMs in a Resource Group
az vm stop --ids $(az vm list -g MyResourceGroup --query "[].id" -o tsv)
- Deallocate
az vm deallocate -g RGName -n VMName
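- Deallocate all VMs in a Resource Group (a sketch using the same --ids pattern as above)
az vm deallocate --ids $(az vm list -g MyResourceGroup --query "[].id" -o tsv)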
Create VM
az vm create \
  -n ${VMNAME} \
  -g ${RGNAME} \
  --image ${OSIMAGE} \
  --admin-username ${username} \
  --admin-password ${password} \
  -l ${LOCATION} \
  --size ${VMSIZE} \
  -o table
Get Public IP
az vm show -d -g RGName -n VMName --query publicIps -o tsv
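- Related sketch: list the IP addresses of all VMs in a Resource Group
az vm list-ip-addresses -g RGName --output table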
Storage-Account
- List Storage Accounts
az storage account list --query [*].name --output tsv
az storage account list --query [*].primaryLocation --output tsv
az storage account list --query "[*].{name:name, resourceGroup:resourceGroup}" --output table
- Create Storage Account
az storage account create --location eastus --name Storage-Account-Name --resource-group RG --sku Standard_RAGRS --kind BlobStorage --access-tier Hot
- Delete Storage-Account
az storage account delete --name Storage-Account-Name -y
az storage account delete --name Storage-Account-Name --resource-group RG
az storage account delete --name Storage-Account-Name --resource-group RG -y
- Get Keys
az storage account keys list --account-name Storage-Account-Name --output table
az storage account keys list --resource-group RG --account-name Storage-Account-Name --output table
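- Sketch: capture the first key in a BASH variable for the commands below
ACCOUNT_KEY=$(az storage account keys list --resource-group RG --account-name Storage-Account-Name --query "[0].value" -o tsv)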
- Create Container
az storage container create --name Container-Name --account-name Storage-Account-Name --account-key xyz...==
- List Container
az storage container list --account-name Storage-Account-Name --account-key xyz...== --output table
- Delete Container
az storage container delete --name Container-Name --account-name Storage-Account-Name --account-key xyz...==
- Copy Data from local to Container
az storage blob upload-batch --destination Container-Name --pattern "*.exe" --source "c:\Users\admin\Downloads" --account-name Storage-Account-Name --account-key xyz...==
- Copy data between two containers within the same Storage-Account
az storage blob copy start-batch --destination-container Destination-Container-Name --account-name Storage-Account-Name --account-key xyz...== --source-account-name Storage-Account-Name --source-account-key xyz...== --source-container Source-Container-Name
- Copy data between two Storage-Accounts
az storage blob copy start-batch --destination-container Container-Name --account-name Destination-Storage-Account-Name --account-key xyz...== --source-account-name Source-Storage-Account-Name --source-account-key xyz...== --source-container Container-Name
- List Blob data (BASH)
az storage blob list -c Container-Name --account-name Storage-Account-Name --account-key xyz...==
- List Blob data (BASH), Filenames only
az storage blob list -c Container-Name --account-name Storage-Account-Name --account-key xyz...== --query [*].name --output tsv
- List Blob data and put it into an Array (BASH), watch the query and output
BLOBS=$(az storage blob list -c Container-Name --account-name Storage-Account-Name --account-key xyz...== --query [*].name --output tsv)
- List Array data
for BLOB in $BLOBS; do echo "$BLOB"; done
- List Array data and download to /mnt/d/test/
for BLOB in $BLOBS; do echo "Download: $BLOB"; az storage blob download -n "$BLOB" -f "/mnt/d/test/$BLOB" -c Container-Name --account-name Storage-Account-Name --account-key xyz...==; done
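- Alternative sketch: download a whole container in one call instead of looping
az storage blob download-batch --destination /mnt/d/test --source Container-Name --account-name Storage-Account-Name --account-key xyz...==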
- Delete BLOB
az storage blob delete-batch --source ContainerName --pattern '*.gz' --account-name Storage-Account-Name --account-key xyz...==
SAS Keys
- Create SAS Token on BLOB
az storage blob generate-sas \
  --account-name Storage-Account-Name \
  --account-key xyz...== \
  --container-name Container-Name \
  --name file-Name \
  --permissions acdrw \
  --expiry 2021-01-18
- Test
https://<StorageAccount-Name>.blob.core.windows.net/<Container-Name>/<FileName>?<SAS-Token>
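- The same test with curl (a sketch; assumes the SAS URL format above, -o writes the blob to a local file)
curl -o <FileName> "https://<StorageAccount-Name>.blob.core.windows.net/<Container-Name>/<FileName>?<SAS-Token>"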
- Create SAS Token on Container
Important: you need to add yourself to the role Storage Blob Data Contributor; it will NOT work if you skip this step. For more information see: https://docs.microsoft.com/en-us/azure/storage/common/storage-auth-aad-rbac-portal
az storage container generate-sas --account-name Storage-Account-Name --name Container-Name --permissions acdlrw --expiry 2021-01-20 --auth-mode login --as-user
- Note that the token MUST contain ske and sig, otherwise it is INVALID. A valid return looks like:
"se=2021-01-20&sp=racwdl&sv=2018-11-09&sr=c&skoid=139be1d7-4d53-4d8a-90e2-cc7ace745ad1&sktid=71d4c841-93b3-47b4-ab47-88c7d11f56d2&skt=2021-01-16T19%3A37%3A54Z&ske=2021-01-20T00%3A00%3A00Z&sks=b&skv=2018-11-09&sig=LMhhiOrZoEQPxgaemkKOZ2eY8W6Ee4ZE5zEWZu2y4Js%3D"
- Use AZ without a login to enumerate all blobs
az storage blob list -c Container-Name --account-name Storage-Account-Name --account-key xyz...==
Snapshots
- Get DiskID
diskID=$(az vm show --resource-group "MyResourceGroup" --name "MyVMName" --query "storageProfile.osDisk.managedDisk.id" | grep -oP 'disks/\K.+' | rev | cut -c2- | rev)
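- Simpler sketch: with -o tsv the surrounding quotes are stripped, and az snapshot create --source also accepts the full disk ID
diskID=$(az vm show --resource-group "MyResourceGroup" --name "MyVMName" --query "storageProfile.osDisk.managedDisk.id" -o tsv)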
- Create a date string
now=$(date -u +"%Y-%m-%d-%H-%M")
- Create Snapshot
az snapshot create --name "Snapshot_$now" --resource-group "MyResourceGroup" --source $diskID
- List Snapshot
az snapshot list --resource-group "MyResourceGroup"
- Delete Snapshot:
az snapshot delete --ids "<snapshot_id>"
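- Delete all Snapshots in a Resource Group (a sketch using the same --ids pattern as for VMs)
az snapshot delete --ids $(az snapshot list --resource-group "MyResourceGroup" --query "[].id" -o tsv)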
Copy Snapshot to Storage Account
Note: the target Storage-Account must be (or must be converted to) a General Purpose v2 account
- Provide the subscription Id where snapshot is created
subscriptionId="..."
- Provide the name of your resource group where snapshot is created
resourceGroupName="..."
- Provide the snapshot name
snapshotName="Snapshot_2021-01-22-18-31"
- Provide Shared Access Signature (SAS) expiry duration in seconds e.g. 3600.
- Learn more about SAS here: https://docs.microsoft.com/en-us/azure/storage/storage-dotnet-shared-access-signature-part-1
sasExpiryDuration=3600
- Provide storage account name where you want to copy the snapshot.
storageAccountName="StorageAccountName"
- Name of the storage container where the downloaded snapshot will be stored
storageContainerName="ContainerName"
- Provide the key of the storage account where you want to copy snapshot.
storageAccountKey="...."
- Provide the name of the VHD file to which snapshot will be copied.
destinationVHDFileName="ubuntutest.vhd"
- Optional: set your subscription ID
az account set --subscription $subscriptionId
- Get a SAS token
sas=$(az snapshot grant-access --resource-group $resourceGroupName --name $snapshotName --duration-in-seconds $sasExpiryDuration --query [accessSas] -o tsv)
- Copy your Snapshot to your Storage Account
az storage blob copy start --destination-blob $destinationVHDFileName --destination-container $storageContainerName --account-name $storageAccountName --account-key $storageAccountKey --source-uri $sas
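- Optional sketch: check the copy progress (the copy status is part of the destination blob's properties)
az storage blob show --container-name $storageContainerName --name $destinationVHDFileName --account-name $storageAccountName --account-key $storageAccountKey --query "properties.copy.status" -o tsv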
List Images
az vm image list --offer Debian --all --output table
Run remote command
az vm run-command invoke -g ResourceGroup -n VMName --command-id RunShellScript --scripts "ps -e"
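- Windows variant (a sketch using the built-in RunPowerShellScript command)
az vm run-command invoke -g ResourceGroup -n VMName --command-id RunPowerShellScript --scripts "Get-Process"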
Service Principal Name
- Create an SPN and assign it a role for the storage account
az ad sp create-for-rbac --name spnadmin01 --role "Storage Blob Data Contributor"
- Remember the credentials!
{ "appId": "...............................", "displayName": "spnadmin01", "name": "http://spnadmin01", "password": "...............................", "tenant": "..............................." }
- Login
az login --service-principal --username "..............................." --password '...............................' --tenant "..............................."
- List role assignments (shows the SPN's role assignment)
az role assignment list
- List available Azure roles
az role definition list --out table
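- Delete the SPN when it is no longer needed (a sketch; use the appId from the output above)
az ad sp delete --id <appId>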
Disk Management
Note: a disk resize cannot be reversed (disks can only be grown, not shrunk)
- Get DiskID first:
az vm show -d -g RGName -n VMName --query "storageProfile.osDisk.managedDisk.id"
- Deallocate the VM
az vm deallocate -g resource-group -n vmname
- Resize OS or Data Disk to 50GB
- Note: when resizing the OS disk of a Linux VM the disk is expanded automatically; otherwise a manual expand step is required later (a sketch follows below)
az disk update --name DiskName --resource-group default --size-gb 50
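- Manual expand sketch for a Linux VM (device name and filesystem are assumptions; growpart comes from the cloud-guest-utils package)
# /dev/sda and ext4 are assumptions - adjust to your VM
sudo growpart /dev/sda 1
sudo resize2fs /dev/sda1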
- Create a new Disk
az disk create -n myDataDisk01 -g default --size-gb 50
- Optional: update and encrypt the disk using your own disk-encryption-set
az disk update -n myDataDisk01 -g default --encryption-type EncryptionAtRestWithCustomerKey --disk-encryption-set DESName
- Optional: show the encryption status
az disk show -g default -n myDataDisk01 --query [encryption.type] -o tsv
- Attach disk to running VM
az vm disk attach --resource-group default --vm-name VMName --name myDataDisk01
- Verify attached disk
az vm show -g default -n VMName --query storageProfile.dataDisks -o table
- Detach disk from running VM
az vm disk detach --resource-group default --vm-name VMName --name myDataDisk01
- Delete Disk
az disk delete -n myDataDisk01 -g default
Workshop
Create a StorageAccount and SAS Keys for backup purposes
- Create Storage Account
az storage account create --location eastus --name <storage-account> --resource-group <resource-group-name> --sku Standard_LRS --kind BlobStorage --access-tier Cool
- Get Keys
az storage account keys list -n <storage-account>
- Create Container
az storage container create --name <container-name> --account-name <storage-account> --account-key xyz...==
Create a SAS Key for contributor
az storage container generate-sas --account-name <storage-account> --expiry 2025-01-01 --name <container-name> --permissions acdlrw --account-key xyz....==
Returns a token like: "se=2025-01-01&sp=rwdl&sv=2018-11-09&sr=c&sig....."
- Copy test data to container
az storage blob upload-batch --destination <container-name> --pattern "hosts" --source "/etc" --account-name <storage-account> --sas-token "se=2025-01-01&sp=rwdl&sv=2018-11-09&sr=c&sig..."
- List data
az storage blob list -c <container-name> --account-name <storage-account> --sas-token "se=2025-01-01&sp=rwdl&sv=2018-11-09&sr=c&sig=..."
- Delete data
az storage blob delete-batch --source <container-name> --pattern 'ldd*' --account-name <storage-account> --sas-token "se=2025-01-01&sp=rwdl&sv=2018-11-09&sr=c&sig=....."
Create a SAS Key for the backup user
The backup user will be limited to write only
az storage container generate-sas --account-name <storage-account> --expiry 2025-01-01 --name <container-name> --permissions w --account-key xyz..==
Returns a token like: "se=2025-01-01&sp=w&sv=2018-11-09&sr=c&sig=...."
- Copy data to container using the backup (write permission) key
az storage blob upload-batch --destination <container-name> --pattern "hosts" --source "/etc" --account-name <storage-account> --sas-token "se=2025-01-01&sp=w&sv=2018-11-09&sr=c&sig=....."
- List data - this fails on purpose (the token is write-only)
az storage blob list -c <container-name> --account-name <storage-account> --sas-token "se=2025-01-01&sp=w&sv=2018-11-09&sr=...."
- Delete data - this fails on purpose (the token is write-only)
az storage blob delete-batch --source <container-name> --pattern 'issue*' --account-name <storage-account> --sas-token "se=2025-01-01&sp=w&sv=2018-11-09&sr=c&sig=......"
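- Verify the write-only upload worked by listing with the account key (a sketch; the account key has full rights)
az storage blob list -c <container-name> --account-name <storage-account> --account-key xyz...== --output table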