AZ Cheat Sheet
Login and Subscription
- Login
az login --use-device-code
- Show subscription
az account show
- List subscription
az account subscription list
- Set Subscription
az account set --subscription SubscriptionName
- Login with an SPN (details below)
az login --service-principal --username "..............................." --password '...............................' --tenant "..............................."
Start/Stop/Deallocate
- Start
az vm start -g MyResourceGroup -n MyVm
- Start all from Resource Group
az vm start --ids $(az vm list -g MyResourceGroup --query "[].id" -o tsv)
- Stop
az vm stop -g MyResourceGroup -n MyVm
- Stop all from Resource Group
az vm stop --ids $(az vm list -g MyResourceGroup --query "[].id" -o tsv)
- Deallocate
az vm deallocate -g RGName -n VMName
Create VM
az vm create \
  -n ${VMNAME} \
  -g ${RGNAME} \
  --image ${OSIMAGE} \
  --admin-username ${username} \
  --admin-password ${password} \
  -l ${LOCATION} \
  --size ${VMSIZE} \
  -o table
Get Public IP
az vm show -d -g RGName -n VMName --query publicIps -o tsv
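- Optional: use the queried IP directly, e.g. to SSH into the VM (a minimal sketch, assuming a Linux VM reachable on port 22 and the admin user from the create step)
ssh ${username}@$(az vm show -d -g RGName -n VMName --query publicIps -o tsv)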
Storage-Account
- List Storage Account
az storage account list --query [*].name --output tsv
az storage account list --query [*].primaryLocation --output tsv
az storage account list --query "[*].[name,resourceGroup]" --output tsv
- Create Storage Account
az storage account create --location eastus --name Storage-Account-Name --resource-group RG --sku Standard_RAGRS --kind BlobStorage --access-tier Hot
- Delete Storage-Account
az storage account delete --name Storage-Account-Name -y
az storage account delete --name Storage-Account-Name --resource-group RG
az storage account delete --name Storage-Account-Name --resource-group RG -y
- Get Keys
az storage account keys list --account-name Storage-Account-Name --output table
az storage account keys list --resource-group RG --account-name Storage-Account-Name --output table
- Create Container
az storage container create --name Container-Name --account-name Storage-Account-Name --account-key xyz...==
- List Container
az storage container list --account-name Storage-Account-Name --account-key xyz...== --output table
- Delete Container
az storage container delete --name Container-Name --account-name Storage-Account-Name --account-key xyz...==
- Copy Data from local to Container
az storage blob upload-batch --destination Container-Name --pattern "*.exe" --source "c:\Users\admin\Downloads" --account-name Storage-Account-Name --account-key xyz...==
- Copy Data from Container to local
az storage blob download-batch --destination "/root" --pattern "file.txt" --source vm-www02 --account-name Storage-Account-Name --sas-token 'sp=r&st=2022-02-10T...'
- Copy data between two containers within the same Storage-Account
az storage blob copy start-batch --destination-container Container-Name --account-name Storage-Account-Name --account-key xyz...== --source-account-name Storage-Account-Name --source-account-key xyz...== --source-container Container-Name
- Copy data between two storage accounts
az storage blob copy start-batch --destination-container Container-Name --account-name Storage-Account-Name --account-key xyz...== --source-account-name Storage-Account-Name --source-account-key xyz...== --source-container ContainerName
- List Blob data (BASH)
az storage blob list -c Container-Name --account-name Storage-Account-Name --account-key xyz...==
- List Blob data (BASH), Filenames only
az storage blob list -c Container-Name --account-name Storage-Account-Name --account-key xyz...== --query [*].name --output tsv
- List Blob data and put it into an array (BASH); note the --query and --output options
BLOBS=$(az storage blob list -c Container-Name --account-name Storage-Account-Name --account-key xyz...== --query [*].name --output tsv)
- List Array data
for BLOB in $BLOBS
do
  echo "$BLOB"
done
- List Array data and download to /mnt/d/test/
for BLOB in $BLOBS
do
  echo "Download: $BLOB"
  az storage blob download -n $BLOB -f /mnt/d/test/$BLOB -c ContainerName --account-name StorageAccountName --account-key xyz...==
done
- Delete BLOB
az storage blob delete-batch --source ContainerName --pattern '*.gz' --account-name Storage-Account-Name --account-key xyz...==
SAS Keys
- Create SAS Token on BLOB
az storage blob generate-sas \
  --account-name Storage-Account-Name \
  --account-key xyz...== \
  --container-name Container-Name \
  --name file-Name \
  --permissions acdrw \
  --expiry 2021-01-18
- Test
https://<StorageAccount-Name>.blob.core.windows.net/<Container-Name>/<FileName>?xyz...==
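- A plain HTTPS GET is enough to verify the token from any shell (sketch; file name and token are placeholders)
curl -o <FileName> "https://<StorageAccount-Name>.blob.core.windows.net/<Container-Name>/<FileName>?<sas-token>"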
- Create SAS Token on Container
Important: You need to add yourself to the Storage Blob Data Contributor role; it will NOT WORK if you skip this step. For more information see: https://docs.microsoft.com/en-us/azure/storage/common/storage-auth-aad-rbac-portal
az storage container generate-sas --account-name Storage-Account-Name --name Container-Name --permissions acdlrw --expiry 2021-01-20 --auth-mode login --as-user
- Note that the token MUST contain ske and sig, otherwise it is INVALID. A valid return looks like:
"se=2021-01-20&sp=racwdl&sv=2018-11-09&sr=c&skoid=139....&sktid=71d4&ske=2021-01-20T00%3........&sks=b&skv=2018-11-09&sig=LMh....s%3D"
- Use az without a login (account key only) to enumerate all blobs
az storage blob list -c vm1 --account-name Storage-Account-Name --account-key xyz...==
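- The container SAS generated above can be used instead of the account key (sketch; token truncated)
az storage blob list -c vm1 --account-name Storage-Account-Name --sas-token "se=2021-01-20&sp=racwdl&sv=2018-11-09&sr=c&sig=..."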
Snapshots
- Get DiskID
diskID=$(az vm show --resource-group "MyResourceGroup" --name "MyVMName" --query "storageProfile.osDisk.managedDisk.id" | grep -oP 'disks/\K.+' | rev | cut -c2- | rev)
- Create a date string
now=$(date -u +"%Y-%m-%d-%H-%M")
- Create Snapshot
az snapshot create --name "Snapshot_$now" --resource-group "MyResourceGroup" --source $diskID
- List Snapshot
az snapshot list --resource-group "MyResourceGroup"
- Delete Snapshot:
az snapshot delete --ids "<snapshot_id>"
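- Delete all snapshots of a Resource Group (sketch, following the same --ids pattern as the VM start/stop commands above)
az snapshot delete --ids $(az snapshot list --resource-group "MyResourceGroup" --query "[].id" -o tsv)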
Copy Snapshot to Storage Account
Note: The target storage account must be (or must be converted to) a General Purpose v2 account
- Provide the subscription Id where snapshot is created
subscriptionId="..."
- Provide the name of your resource group where snapshot is created
resourceGroupName="..."
- Provide the snapshot name
snapshotName="Snapshot_2021-01-22-18-31"
- Provide Shared Access Signature (SAS) expiry duration in seconds e.g. 3600.
- Learn more about SAS here: https://docs.microsoft.com/en-us/azure/storage/storage-dotnet-shared-access-signature-part-1
sasExpiryDuration=3600
- Provide storage account name where you want to copy the snapshot.
storageAccountName="StorageAccountName"
- Name of the storage container where the downloaded snapshot will be stored
storageContainerName="ContainerName"
- Provide the key of the storage account where you want to copy snapshot.
storageAccountKey="...."
- Provide the name of the VHD file to which snapshot will be copied.
destinationVHDFileName="ubuntutest.vhdx"
- Optional set your Subscription ID
az account set --subscription $subscriptionId
- Get a SAS Token
sas=$(az snapshot grant-access --resource-group $resourceGroupName --name $snapshotName --duration-in-seconds $sasExpiryDuration --query [accessSas] -o tsv)
- Copy your Snapshot to your Storage Account
az storage blob copy start --destination-blob $destinationVHDFileName --destination-container $storageContainerName --account-name $storageAccountName --account-key $storageAccountKey --source-uri $sas
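- Optional: check the copy status of the destination blob (sketch, reusing the variables above)
az storage blob show --container-name $storageContainerName --name $destinationVHDFileName --account-name $storageAccountName --account-key $storageAccountKey --query properties.copy.status -o tsv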
List Images
az vm image list --offer Debian --all --output table
Run remote command
az vm run-command invoke -g ResourceGroup -n VMName --command-id RunShellScript --scripts "ps -e"
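- For Windows guests the same works with RunPowerShellScript (sketch)
az vm run-command invoke -g ResourceGroup -n VMName --command-id RunPowerShellScript --scripts "Get-Process"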
Service Principal Name
- Create an SPN and assign it a role for the storage account
az ad sp create-for-rbac --name spnadmin01 --role "Storage Blob Data Contributor"
- Remember the credentials!
{ "appId": "...............................", "displayName": "spnadmin01", "name": "http://spnadmin01", "password": "...............................", "tenant": "..............................." }
- Login
az login --service-principal --username "..............................." --password '...............................' --tenant "..............................."
- List role assignments (verify the SPN's role)
az role assignment list
- List available Azure roles
az role definition list --out table
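- Assign an additional role to an existing SPN (sketch; appId, role and scope are placeholders)
az role assignment create --assignee "<appId>" --role "Reader" --scope "/subscriptions/<subscription-id>/resourceGroups/<rg-name>"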
Disk Management
Note: the resize operation below cannot be reversed (managed disks can only be grown, not shrunk)
- Get DiskID first:
az vm show -d -g RGName -n VMName --query "storageProfile.osDisk.managedDisk.id"
- Deallocate the VM
az vm deallocate -g resource-group -n vmname
- Resize OS or Data Disk to 50GB
- Note: When resizing the OS disk of a Linux machine, the filesystem is usually expanded automatically; otherwise a manual expand step is required later (see the sketch below).
az disk update --name DiskID --resource-group default --size-gb 50
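- If the filesystem is not expanded automatically (e.g. for a data disk), grow the partition and the filesystem inside the guest; a minimal sketch assuming an ext4 filesystem on /dev/sdc1
sudo growpart /dev/sdc 1
sudo resize2fs /dev/sdc1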
- Create a new Disk
az disk create -n myDataDisk01 -g default --size-gb 50
- Optional: update and encrypt the disk using your own disk-encryption-set
az disk update -n myDataDisk01 -g default --encryption-type EncryptionAtRestWithCustomerKey --disk-encryption-set DESName
- Optional: show the encryption status
az disk show -g default -n myDataDisk01 --query [encryption.type] -o tsv
- Attach disk to running VM
az vm disk attach --resource-group default --vm-name VMName --name myDataDisk01
- Verify attached disk
az vm show -g default -n VMName --query storageProfile.dataDisks -o table
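- A newly attached empty disk still has to be partitioned, formatted and mounted inside the guest; a minimal Linux sketch assuming the disk shows up as /dev/sdc and should be mounted at /mnt/data
sudo parted /dev/sdc --script mklabel gpt mkpart data ext4 0% 100%
sudo mkfs.ext4 /dev/sdc1
sudo mkdir -p /mnt/data && sudo mount /dev/sdc1 /mnt/data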
- Detach disk from running VM
az vm disk detach --resource-group default --vm-name VMName --name myDataDisk01
- Delete Disk
az disk delete -n myDataDisk01 -g default
Workshop
Create a StorageAccount and SAS Keys for backup purposes
- Create Storage Account
az storage account create --location eastus --name <storage-account> --resource-group <resource-group-name> --sku Standard_LRS --kind BlobStorage --access-tier Cool
- Get Keys
az storage account keys list -n <storage-account>
- Create Container
az storage container create --name <container-name> --account-name <storage-account> --account-key xyz...==
Create a SAS Key for contributor
az storage container generate-sas --account-name <storage-account> --expiry 2025-01-01 --name <container-name> --permissions acdlrw --account-key xyz....==
"se=2025-01-01&sp=rwdl&sv=2018-11-09&sr=c&sig....."
- Copy test data to container
az storage blob upload-batch --destination <container-name> --pattern "hosts" --source "/etc" --account-name <storage-account> --sas-token "se=2025-01-01&sp=rwdl&sv=2018-11-09&sr=c&sig..."
- List data
az storage blob list -c <container-name> --account-name <storage-account> --sas-token "se=2025-01-01&sp=rwdl&sv=2018-11-09&sr=c&sig=..."
- Delete data
az storage blob delete-batch --source <container-name> --pattern 'ldd*' --account-name <storage-account> --sas-token "se=2025-01-01&sp=rwdl&sv=2018-11-09&sr=c&sig=....."
Create a SAS Key for the backup user
The backup user will be limited to write only
az storage container generate-sas --account-name <storage-account> --expiry 2025-01-01 --name <container-name> --permissions w --account-key xyz..==
"se=2025-01-01&sp=w&sv=2018-11-09&sr=c&sig=...."
- Copy data to the container using the backup (write-permission) key
az storage blob upload-batch --destination <container-name> --pattern "hosts" --source "/etc" --account-name <storage-account> --sas-token "se=2025-01-01&sp=w&sv=2018-11-09&sr=c&sig=....."
- List data - this fails on purpose (the token has no list permission)
az storage blob list -c <container-name> --account-name <storage-account> --sas-token "se=2025-01-01&sp=w&sv=2018-11-09&sr=...."
- Delete data - this fails on purpose (the token has no delete permission)
az storage blob delete-batch --source <container-name> --pattern 'issue*' --account-name <storage-account> --sas-token "se=2025-01-01&sp=w&sv=2018-11-09&sr=c&sig=......"
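- To restore from the backup container a separate read/list SAS is required, because the write-only token above can neither list nor download; a sketch with a hypothetical restore path
az storage container generate-sas --account-name <storage-account> --expiry 2025-01-01 --name <container-name> --permissions lr --account-key xyz..==
az storage blob download-batch --destination "/tmp/restore" --pattern "hosts" --source <container-name> --account-name <storage-account> --sas-token "se=2025-01-01&sp=rl&sv=2018-11-09&sr=c&sig=..."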
ACI
- Create Azure container registry
az acr create --resource-group <rg-name> --name <acr-name> --sku Premium --admin-enabled true
- Docker init
az login
az acr login -n <acr-name>
docker pull docker/<image>
docker tag docker/<image> <acr-name>.azurecr.io/<image>:latest
docker push <acr-name>.azurecr.io/<image>:latest
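- Verify that the image arrived in the registry (sketch)
az acr repository list -n <acr-name> -o table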
- Create App Service Plan and App
az appservice plan create --name <azappsp-name> --resource-group <rg-name> --is-linux --sku F1
az webapp create --resource-group <rg-name> --plan <azappsp-name> --name <azapp-name> --deployment-container-image-name <acr-name>.azurecr.io/<image>:latest -s <acr-username> -w <acr-usr-psswd>
Extensions
Install Dependency Agent
az vm extension set --resource-group <rg-name> --vm-name <vm-name> \
  --name DependencyAgentWindows --publisher Microsoft.Azure.Monitoring.DependencyAgent --version 9.5
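- Verify the installed extensions (sketch)
az vm extension list --resource-group <rg-name> --vm-name <vm-name> -o table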
Remove Extension
Remove-AzVMExtension -ResourceGroupName <rg-name> -Name "MicrosoftMonitoringAgent" -VMName <vm-name>
Set Extension for LogAnalytics
$PublicSettings = @{"workspaceId" = "..."}
$ProtectedSettings = @{"workspaceKey" = "..."}
Set-AzVMExtension -ExtensionName "MicrosoftMonitoringAgent" `
  -ResourceGroupName "rg-name" `
  -VMName "vm-name" `
  -Publisher "Microsoft.EnterpriseCloud.Monitoring" `
  -ExtensionType "MicrosoftMonitoringAgent" `
  -TypeHandlerVersion 1.0 `
  -Settings $PublicSettings `
  -ProtectedSettings $ProtectedSettings `
  -Location WestEurope