Raspberry Pi Cloud Backup Part 2
#terraform #azure #backup #raspberrypi
Architecture
In the first part we compared the costs of different cloud providers and picked Azure to store our data.
Now let's walk through the architecture of the implementation.
First we have to decide which components to use.
Since we don't want to set up any additional infrastructure such as a VPN, the communication goes over the internet using HTTPS.
Infrastructure
Let's build the infrastructure. We need a pay-as-you-go subscription, which comes with a 200 USD credit for the first 30 days plus a range of services that are free for one year. And guess what: archive storage with 10 GB for free is one of those services.
To set up the Azure components we use Terraform. Terraform codifies cloud APIs into declarative configuration files that you can use to describe the infrastructure you want. It also works with multiple cloud providers, so you only have to learn one language instead of several. Neat, isn't it?
Let's install the requirements.
The Azure CLI is used to log in to the new account and to authorize Terraform.
On Windows, open a PowerShell window as administrator and paste in the following command:
winget install --id Microsoft.AzureCLI
For Debian-based Linux distributions:
curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash
Now let's install Terraform.
On Windows via Chocolatey:
Set-ExecutionPolicy Bypass -Scope Process -Force; [System.Net.ServicePointManager]::SecurityProtocol = [System.Net.ServicePointManager]::SecurityProtocol -bor 3072; iex ((New-Object System.Net.WebClient).DownloadString('https://community.chocolatey.org/install.ps1'))
choco install terraform
On Linux:
sudo apt-get update && sudo apt-get install -y gnupg software-properties-common
wget -O- https://apt.releases.hashicorp.com/gpg | \
gpg --dearmor | \
sudo tee /usr/share/keyrings/hashicorp-archive-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] \
https://apt.releases.hashicorp.com $(lsb_release -cs) main" | \
sudo tee /etc/apt/sources.list.d/hashicorp.list
sudo apt update
sudo apt-get install terraform
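After both installs finish, a quick version check confirms the tools are available on your PATH (optional):
az --version
terraform version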
Now you can download the two files main.tf and example.tfvars from my github.
Put both files into an empty directory and change your shell's current directory to that new directory.
Log in to Azure with the following command:
az login
The output should look something like this:
[
  {
    "cloudName": "AzureCloud",
    "homeTenantId": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxxx",
    "id": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
    "isDefault": true,
    "managedByTenants": [],
    "name": "Pay-As-You-Go",
    "state": "Enabled",
    "tenantId": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxxx",
    "user": {
      "name": "[email protected]",
      "type": "user"
    }
  }
]
Copy the id value from this output and replace the subscription_id value in the example.tfvars file.
Also copy the tenantId value from the output and replace the tenant_id value in the example.tfvars file.
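If you prefer to script that step, the same values can be pulled straight from the Azure CLI and patched into the file. A minimal sketch, assuming example.tfvars contains one subscription_id = "..." and one tenant_id = "..." assignment per line:
# Read the ids of the account you just logged in with
SUBSCRIPTION_ID=$(az account show --query id -o tsv)
TENANT_ID=$(az account show --query tenantId -o tsv)
# Patch the placeholder values in example.tfvars
sed -i "s|^subscription_id.*|subscription_id = \"$SUBSCRIPTION_ID\"|" example.tfvars
sed -i "s|^tenant_id.*|tenant_id = \"$TENANT_ID\"|" example.tfvars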
Now we are ready to spin up a service principal, a resource group and a network-restricted storage account.
Let's run Terraform:
terraform init
terraform plan -var-file="example.tfvars"
It should return the following output:
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
# azuread_application.appregistration will be created
+ resource "azuread_application" "appregistration" {
+ app_role_ids = (known after apply)
+ application_id = (known after apply)
+ disabled_by_microsoft = (known after apply)
+ display_name = "backupapplication"
+ id = (known after apply)
+ logo_url = (known after apply)
+ oauth2_permission_scope_ids = (known after apply)
+ object_id = (known after apply)
+ owners = [
+ "xxxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxxx",
]
+ prevent_duplicate_names = false
+ publisher_domain = (known after apply)
+ sign_in_audience = "AzureADMyOrg"
+ tags = (known after apply)
+ template_id = (known after apply)
+ feature_tags {
+ custom_single_sign_on = (known after apply)
+ enterprise = (known after apply)
+ gallery = (known after apply)
+ hide = (known after apply)
}
}
# azuread_application_password.appregistrationPassword will be created
+ resource "azuread_application_password" "appregistrationPassword" {
+ application_object_id = (known after apply)
+ display_name = (known after apply)
+ end_date = (known after apply)
+ end_date_relative = "8765h48m"
+ id = (known after apply)
+ key_id = (known after apply)
+ start_date = (known after apply)
+ value = (sensitive value)
}
# azuread_service_principal.backupserviceprinciple will be created
+ resource "azuread_service_principal" "backupserviceprinciple" {
+ account_enabled = true
+ app_role_assignment_required = false
+ app_role_ids = (known after apply)
+ app_roles = (known after apply)
+ application_id = (known after apply)
+ application_tenant_id = (known after apply)
+ display_name = (known after apply)
+ homepage_url = (known after apply)
+ id = (known after apply)
+ logout_url = (known after apply)
+ oauth2_permission_scope_ids = (known after apply)
+ oauth2_permission_scopes = (known after apply)
+ object_id = (known after apply)
+ owners = [
+ "xxxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxxx",
]
+ redirect_uris = (known after apply)
+ saml_metadata_url = (known after apply)
+ service_principal_names = (known after apply)
+ sign_in_audience = (known after apply)
+ tags = (known after apply)
+ type = (known after apply)
+ feature_tags {
+ custom_single_sign_on = (known after apply)
+ enterprise = (known after apply)
+ gallery = (known after apply)
+ hide = (known after apply)
}
+ features {
+ custom_single_sign_on_app = (known after apply)
+ enterprise_application = (known after apply)
+ gallery_application = (known after apply)
+ visible_to_users = (known after apply)
}
}
# azurerm_resource_group.resourcegroup will be created
+ resource "azurerm_resource_group" "resourcegroup" {
+ id = (known after apply)
+ location = "westeurope"
+ name = "homeresourcegroup"
+ tags = {
+ "environment" = "homesetup"
}
}
# azurerm_role_assignment.StorageBlobDataOwner will be created
+ resource "azurerm_role_assignment" "StorageBlobDataOwner" {
+ id = (known after apply)
+ name = (known after apply)
+ principal_id = (known after apply)
+ principal_type = (known after apply)
+ role_definition_id = (known after apply)
+ role_definition_name = "Storage Blob Data Owner"
+ scope = (known after apply)
+ skip_service_principal_aad_check = (known after apply)
}
# azurerm_storage_account.storage will be created
+ resource "azurerm_storage_account" "storage" {
+ access_tier = (known after apply)
+ account_kind = "StorageV2"
+ account_replication_type = "LRS"
+ account_tier = "Standard"
+ allow_nested_items_to_be_public = true
+ cross_tenant_replication_enabled = true
+ default_to_oauth_authentication = false
+ enable_https_traffic_only = true
+ id = (known after apply)
+ infrastructure_encryption_enabled = false
+ is_hns_enabled = false
+ large_file_share_enabled = (known after apply)
+ location = "westeurope"
+ min_tls_version = "TLS1_2"
+ name = "storageraspberrybackup"
+ nfsv3_enabled = false
+ primary_access_key = (sensitive value)
+ primary_blob_connection_string = (sensitive value)
+ primary_blob_endpoint = (known after apply)
+ primary_blob_host = (known after apply)
+ primary_connection_string = (sensitive value)
+ primary_dfs_endpoint = (known after apply)
+ primary_dfs_host = (known after apply)
+ primary_file_endpoint = (known after apply)
+ primary_file_host = (known after apply)
+ primary_location = (known after apply)
+ primary_queue_endpoint = (known after apply)
+ primary_queue_host = (known after apply)
+ primary_table_endpoint = (known after apply)
+ primary_table_host = (known after apply)
+ primary_web_endpoint = (known after apply)
+ primary_web_host = (known after apply)
+ queue_encryption_key_type = "Service"
+ resource_group_name = "homeresourcegroup"
+ secondary_access_key = (sensitive value)
+ secondary_blob_connection_string = (sensitive value)
+ secondary_blob_endpoint = (known after apply)
+ secondary_blob_host = (known after apply)
+ secondary_connection_string = (sensitive value)
+ secondary_dfs_endpoint = (known after apply)
+ secondary_dfs_host = (known after apply)
+ secondary_file_endpoint = (known after apply)
+ secondary_file_host = (known after apply)
+ secondary_location = (known after apply)
+ secondary_queue_endpoint = (known after apply)
+ secondary_queue_host = (known after apply)
+ secondary_table_endpoint = (known after apply)
+ secondary_table_host = (known after apply)
+ secondary_web_endpoint = (known after apply)
+ secondary_web_host = (known after apply)
+ shared_access_key_enabled = true
+ table_encryption_key_type = "Service"
+ tags = {
+ "environment" = "homesetup"
}
+ blob_properties {
+ change_feed_enabled = (known after apply)
+ change_feed_retention_in_days = (known after apply)
+ default_service_version = (known after apply)
+ last_access_time_enabled = (known after apply)
+ versioning_enabled = (known after apply)
+ container_delete_retention_policy {
+ days = (known after apply)
}
+ cors_rule {
+ allowed_headers = (known after apply)
+ allowed_methods = (known after apply)
+ allowed_origins = (known after apply)
+ exposed_headers = (known after apply)
+ max_age_in_seconds = (known after apply)
}
+ delete_retention_policy {
+ days = (known after apply)
}
}
+ network_rules {
+ bypass = (known after apply)
+ default_action = "Deny"
+ ip_rules = (known after apply)
+ virtual_network_subnet_ids = (known after apply)
}
+ queue_properties {
+ cors_rule {
+ allowed_headers = (known after apply)
+ allowed_methods = (known after apply)
+ allowed_origins = (known after apply)
+ exposed_headers = (known after apply)
+ max_age_in_seconds = (known after apply)
}
+ hour_metrics {
+ enabled = (known after apply)
+ include_apis = (known after apply)
+ retention_policy_days = (known after apply)
+ version = (known after apply)
}
+ logging {
+ delete = (known after apply)
+ read = (known after apply)
+ retention_policy_days = (known after apply)
+ version = (known after apply)
+ write = (known after apply)
}
+ minute_metrics {
+ enabled = (known after apply)
+ include_apis = (known after apply)
+ retention_policy_days = (known after apply)
+ version = (known after apply)
}
}
+ routing {
+ choice = (known after apply)
+ publish_internet_endpoints = (known after apply)
+ publish_microsoft_endpoints = (known after apply)
}
+ share_properties {
+ cors_rule {
+ allowed_headers = (known after apply)
+ allowed_methods = (known after apply)
+ allowed_origins = (known after apply)
+ exposed_headers = (known after apply)
+ max_age_in_seconds = (known after apply)
}
+ retention_policy {
+ days = (known after apply)
}
+ smb {
+ authentication_types = (known after apply)
+ channel_encryption_type = (known after apply)
+ kerberos_ticket_encryption_type = (known after apply)
+ versions = (known after apply)
}
}
}
Plan: 6 to add, 0 to change, 0 to destroy.
Changes to Outputs:
+ appId = (known after apply)
+ displayName = "backupapplication"
+ password = (known after apply)
+ tenant = "xxxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxxx"
Everything looks good, so let's go ahead with the deployment:
terraform apply -var-file="example.tfvars"
Note down all the values from the output, i.e. appId, displayName, tenant and password (printing the password as an output is an exception made here and should not be done in production or in any pipeline; use Azure Key Vault instead if you need it there).
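The values can also be read back from the Terraform state at any time with terraform output. The Key Vault below is only an illustration and is not created by this setup:
# Read individual outputs from the state; password is a sensitive output
terraform output -raw appId
terraform output -raw password
# A safer place for the secret than a text file, assuming a Key Vault named homebackupvault already exists
az keyvault secret set --vault-name homebackupvault --name backupapplication-secret --value "$(terraform output -raw password)"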
And with that, the infrastructure is done. The next step is to build a bash script that uses Terraform's outputs to authenticate against Azure and back up our files.
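To give a rough idea of the shape such a script could take (a sketch only: APP_ID, PASSWORD and TENANT stand for the Terraform outputs, the backups container and the file path are placeholders, and the storage account's network rules must allow your client IP):
# Authenticate as the service principal created by Terraform
az login --service-principal --username "$APP_ID" --password "$PASSWORD" --tenant "$TENANT"
# Upload an archive to the storage account created above
az storage blob upload \
  --account-name storageraspberrybackup \
  --container-name backups \
  --name backup-$(date +%F).tar.gz \
  --file /path/to/backup.tar.gz \
  --auth-mode login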
So don't miss part 3.
Reference
This article (Raspberry Pi Cloud Backup Part 2) was originally published at https://dev.to/mrscripting/raspberrypi-cloud-backup-part-2-34ki. Feel free to share or copy the text, but please keep that URL as the reference.