Changing a DynamoDB sort key with ServerlessFramework
Conclusion
Changing the sort key makes the CloudFormation update fail with an error.
To change it, you have to back up the data and restore it yourself.
So I verified the behavior and tried the recovery in the following order.
What I did
Creating the DynamoDB table
Define the following DynamoDB table in the Resources section of serverless.yml and deploy it.
serverless.yml
service: dynamo-test
frameworkVersion: '2'
provider:
  name: aws
  runtime: nodejs12.x
  lambdaHashingVersion: 20201221
  region: ap-northeast-1
resources:
 Resources:
   Test:
     Type: AWS::DynamoDB::Table
     Properties:
       TableName: Test
       AttributeDefinitions:
         - AttributeName: pKey
           AttributeType: S
         - AttributeName: sKey
           AttributeType: S
       StreamSpecification:
         StreamViewType: NEW_AND_OLD_IMAGES
       KeySchema:
        - AttributeName: pKey
          KeyType: HASH
        - AttributeName: sKey
          KeyType: RANGE
       BillingMode: PAY_PER_REQUEST
npx serverless deploy
Changing the sort key
Rename sKey to sKey2 and deploy.
resources:
 Resources:
   Test:
     Type: AWS::DynamoDB::Table
     Properties:
       TableName: Test
       AttributeDefinitions:
         - AttributeName: pKey
           AttributeType: S
-         - AttributeName: sKey
+         - AttributeName: sKey2
           AttributeType: S
       StreamSpecification:
         StreamViewType: NEW_AND_OLD_IMAGES
       KeySchema:
        - AttributeName: pKey
          KeyType: HASH
-        - AttributeName: sKey
+        - AttributeName: sKey2
          KeyType: RANGE
       BillingMode: PAY_PER_REQUEST
npx serverless deploy
An error occurred: Test - CloudFormation cannot update a stack when a custom-named resource requires replacing. Rename Test and update the stack again..
Incidentally, if you work around this by changing TableName and deploying, the original table (Test) gets deleted.
resources:
 Resources:
   Test:
     Type: AWS::DynamoDB::Table
     Properties:
-       TableName: Test
+       TableName: Test2
       AttributeDefinitions:
         - AttributeName: pKey
           AttributeType: S
         - AttributeName: sKey
           AttributeType: S
       StreamSpecification:
         StreamViewType: NEW_AND_OLD_IMAGES
       KeySchema:
        - AttributeName: pKey
          KeyType: HASH
        - AttributeName: sKey
          KeyType: RANGE
       BillingMode: PAY_PER_REQUEST
Restore
Changing the key schema of a DynamoDB table means backing the data up, deleting and recreating the table, and then restoring the data.
Backup
Creating the backup table definition and copying the data into it can both be handled by the script below.
migrate.ts
import DynamoDB, { KeySchema } from 'aws-sdk/clients/dynamodb';
const AWS_INFO = {
  region: 'ap-northeast-1',
  accessKeyId: 'xxx',
  secretAccessKey: 'xxx',
};
const backupInfo = {
  tableName: 'Test',  // table to back up
  oldKeyName: 'sKey',  // key to be replaced
  keyName: 'sKey2',  // key after the change
  KeyType: 'RANGE',  // type of the key
  AttributeType: 'S',  // attribute type of the key
  converter: {
    sKey2: (item: any) => `${item.sKey}2`,
  },  // conversion logic: for each key name, run the value-side function and store its result
};
const client = new DynamoDB.DocumentClient(AWS_INFO);
const dynamoDb = new DynamoDB({
  apiVersion: '2012-08-10',
  region: AWS_INFO.region,
  credentials: {
    accessKeyId: AWS_INFO.accessKeyId,
    secretAccessKey: AWS_INFO.secretAccessKey,
  },
});
/**
 * Recursively fetch every item from the table being backed up.
 * Note: this can run out of memory (OOM) if the table holds a large number of items.
 * @param tableName
 * @param pre
 * @param lastEvaluatedKey
 * @returns
 */
const listItems = async <T>(tableName: string, pre: T[], lastEvaluatedKey?: DynamoDB.DocumentClient.Key) => {
  console.log('list item');
  const items = await client
    .scan({
      TableName: tableName,
      ExclusiveStartKey: lastEvaluatedKey,
    })
    .promise();
  const result = [...pre, ...items.Items];
  if (items.LastEvaluatedKey) {
    return await listItems(tableName, result, items.LastEvaluatedKey);
  }
  return result;
};
/**
 * Insert data into the table 25 items at a time.
 * batchWrite only accepts up to 25 items per request.
 * @param tableName
 * @param items
 */
const insertItems = async <T>(tableName: string, items: T[]) => {
  const batch25 = async (items: T[]) => {
    if (items.length === 0) {
      return;
    }
    await client
      .batchWrite({
        RequestItems: {
          [tableName]: items.slice(0, 25).map(item => ({ PutRequest: { Item: item } })),
        },
      })
      .promise();
    return await batch25(items.slice(25));
  };
  await batch25(items);
};
/**
 * Convert an item while backing it up.
 * @param item
 * @returns
 */
const converter = (item: any) => ({
  ...item,
  ...Object.entries(backupInfo.converter).reduce(
    (pre, [key, value]) => ({
      ...pre,
      [key]: value(item),
    }),
    {},
  ),
});
/**
 * Create the backup table. Its name is fixed to <original table name>_bak.
 * @param tableInfo
 */
const createTable = async (tableInfo: DynamoDB.DescribeTableOutput) => {
  await dynamoDb
    .createTable({
      TableName: `${tableInfo.Table.TableName}_bak`,
      KeySchema: createKeySchema(tableInfo.Table.KeySchema),
      AttributeDefinitions: createAttributeDefinitions(tableInfo.Table.AttributeDefinitions),
      LocalSecondaryIndexes: tableInfo.Table.LocalSecondaryIndexes?.map(idx => ({
        IndexName: idx.IndexName,
        KeySchema: idx.KeySchema,
        Projection: idx.Projection,
      })),
      GlobalSecondaryIndexes: tableInfo.Table.GlobalSecondaryIndexes?.map(idx => ({
        IndexName: idx.IndexName,
        KeySchema: idx.KeySchema,
        Projection: idx.Projection,
        // With PAY_PER_REQUEST billing these values are 0, and passing them causes an error
        // ProvisionedThroughput: {
        //   ReadCapacityUnits: idx.ProvisionedThroughput.ReadCapacityUnits,
        //   WriteCapacityUnits: idx.ProvisionedThroughput.WriteCapacityUnits,
        // },
      })),
      BillingMode: tableInfo.Table.BillingModeSummary.BillingMode,
      // With PAY_PER_REQUEST billing these values are 0, and passing them causes an error
      // ProvisionedThroughput: {
      //   ReadCapacityUnits: tableInfo.Table.ProvisionedThroughput.ReadCapacityUnits,
      //   WriteCapacityUnits: tableInfo.Table.ProvisionedThroughput.WriteCapacityUnits,
      // },
      StreamSpecification: tableInfo.Table.StreamSpecification,
      SSESpecification: tableInfo.Table.SSEDescription,
    })
    .promise();
};
const createKeySchema = (keySchema: KeySchema): KeySchema => {
  return [
    ...keySchema.filter(key => key.AttributeName !== backupInfo.oldKeyName),
    {
      AttributeName: backupInfo.keyName,
      KeyType: backupInfo.KeyType,
    },
  ];
};
const createAttributeDefinitions = (attributeDefinitions: DynamoDB.AttributeDefinitions) => {
  return [
    ...attributeDefinitions.filter(def => def.AttributeName !== backupInfo.oldKeyName),
    {
      AttributeName: backupInfo.keyName,
      AttributeType: backupInfo.AttributeType,
    },
  ];
};
const sleep = async (ms: number) => {
  return new Promise(resolve =>
    setTimeout(() => {
      resolve(null);
    }, ms),
  );
};
const migrate = async () => {
  const tableInfo = await dynamoDb
    .describeTable({
      TableName: backupInfo.tableName,
    })
    .promise();
  await createTable(tableInfo);
  // Wait until the table has finished being created
  while (true) {
    console.log('wait ...');
    await sleep(5000);
    const tableInfo = await dynamoDb
      .describeTable({
        TableName: `${backupInfo.tableName}_bak`,
      })
      .promise();
    if (tableInfo.Table.TableStatus === 'ACTIVE') {
      break;
    }
  }
  const result = await listItems(backupInfo.tableName, []);
  await insertItems(
    `${backupInfo.tableName}_bak`,
    result.map(ret => converter(ret)),
  );
};
migrate();
npx ts-node migrate.ts
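Before touching the original table, it is worth checking that Test_bak really contains the converted items. Below is a minimal sketch (not part of the original scripts; the file name is arbitrary) that counts the items in the backup table and prints one of them, assuming the same credentials as migrate.ts and a table small enough for a single scan page.
check-backup.ts
import DynamoDB from 'aws-sdk/clients/dynamodb';

const client = new DynamoDB.DocumentClient({
  region: 'ap-northeast-1',
  accessKeyId: 'xxx',
  secretAccessKey: 'xxx',
});

const checkBackup = async () => {
  // A single scan is enough for this small test table; a larger table would
  // have to follow LastEvaluatedKey the same way listItems does.
  const result = await client.scan({ TableName: 'Test_bak' }).promise();
  console.log(`Test_bak holds ${result.Count} items`);
  // Print one sample item to confirm that sKey2 was added by the converter
  console.log(result.Items?.[0]);
};

checkBackup();
Once the backup looks good, remove the Test definition from serverless.yml and deploy, which deletes the old table.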
resources:
 Resources:
-   Test:
-     Type: AWS::DynamoDB::Table
-     Properties:
-       TableName: Test
-       AttributeDefinitions:
-         - AttributeName: pKey
-           AttributeType: S
-         - AttributeName: sKey
-           AttributeType: S
-       StreamSpecification:
-         StreamViewType: NEW_AND_OLD_IMAGES
-       KeySchema:
-        - AttributeName: pKey
-          KeyType: HASH
-        - AttributeName: sKey
-          KeyType: RANGE
-       BillingMode: PAY_PER_REQUEST
npx serverless deploy
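At this point the Test table should be gone. If you want to confirm that before recreating it, a small sketch along these lines works (again assuming the same credentials; in aws-sdk v2, describeTable throws ResourceNotFoundException once the table no longer exists):
confirm-deleted.ts
import DynamoDB from 'aws-sdk/clients/dynamodb';

const dynamoDb = new DynamoDB({
  apiVersion: '2012-08-10',
  region: 'ap-northeast-1',
  credentials: { accessKeyId: 'xxx', secretAccessKey: 'xxx' },
});

const confirmDeleted = async () => {
  try {
    await dynamoDb.describeTable({ TableName: 'Test' }).promise();
    console.log('Test still exists (it may still be in DELETING state)');
  } catch (e) {
    // Once the table is fully deleted, describeTable fails with this error code
    if ((e as any).code === 'ResourceNotFoundException') {
      console.log('Test has been deleted');
    } else {
      throw e;
    }
  }
};

confirmDeleted();
Then put the Test definition back into serverless.yml with sKey2 as the sort key and deploy again.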
resources:
 Resources:
   Test:
     Type: AWS::DynamoDB::Table
     Properties:
       TableName: Test
       AttributeDefinitions:
         - AttributeName: pKey
           AttributeType: S
-         - AttributeName: sKey
+         - AttributeName: sKey2
           AttributeType: S
       StreamSpecification:
         StreamViewType: NEW_AND_OLD_IMAGES
       KeySchema:
        - AttributeName: pKey
          KeyType: HASH
-        - AttributeName: sKey
+        - AttributeName: sKey2
          KeyType: RANGE
       BillingMode: PAY_PER_REQUEST
npx serverless deploy
migrate.ts (a partially modified version of the script above)
import DynamoDB from 'aws-sdk/clients/dynamodb';
const AWS_INFO = {
  region: 'ap-northeast-1',
  accessKeyId: 'xxx',
  secretAccessKey: 'xxx',
};
const backupInfo = {
  tableName: 'Test',  // original table name; the backup lives in Test_bak
};
const client = new DynamoDB.DocumentClient(AWS_INFO);
/**
 * Recursively fetch every item from the given table.
 * Note: this can run out of memory (OOM) if the table holds a large number of items.
 * @param tableName
 * @param pre
 * @param lastEvaluatedKey
 * @returns
 */
const listItems = async <T>(tableName: string, pre: T[], lastEvaluatedKey?: DynamoDB.DocumentClient.Key) => {
  console.log('list item');
  const items = await client
    .scan({
      TableName: tableName,
      ExclusiveStartKey: lastEvaluatedKey,
    })
    .promise();
  const result = [...pre, ...items.Items];
  if (items.LastEvaluatedKey) {
    return await listItems(tableName, result, items.LastEvaluatedKey);
  }
  return result;
};
/**
 * Insert data into the table 25 items at a time.
 * batchWrite only accepts up to 25 items per request.
 * @param tableName
 * @param items
 */
const insertItems = async <T>(tableName: string, items: T[]) => {
  const batch25 = async (items: T[]) => {
    if (items.length === 0) {
      return;
    }
    await client
      .batchWrite({
        RequestItems: {
          [tableName]: items.slice(0, 25).map(item => ({ PutRequest: { Item: item } })),
        },
      })
      .promise();
    return await batch25(items.slice(25));
  };
  await batch25(items);
};
const migrate = async () => {
  // Pull everything out of the backup table and write it into the recreated Test table
  const result = await listItems(`${backupInfo.tableName}_bak`, []);
  await insertItems(backupInfo.tableName, result);
};
migrate();
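Run the modified script the same way to copy the data from Test_bak back into the recreated Test table.
npx ts-node migrate.ts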
In closing
It seems like it should be easy, but it is a hassle.
The partition key and sort key should hold values that never change. For lookups on values that can change, use a GSI instead.
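To illustrate that last point, a GSI keyed on a mutable attribute can be added to a live table without replacing it, either in serverless.yml or via updateTable. A hypothetical sketch (the status attribute, the index name, and the file name are made up for the example):
add-index.ts
import DynamoDB from 'aws-sdk/clients/dynamodb';

const dynamoDb = new DynamoDB({
  apiVersion: '2012-08-10',
  region: 'ap-northeast-1',
  credentials: { accessKeyId: 'xxx', secretAccessKey: 'xxx' },
});

const addIndex = async () => {
  await dynamoDb
    .updateTable({
      TableName: 'Test',
      // The new GSI key attribute has to be declared here as well
      AttributeDefinitions: [{ AttributeName: 'status', AttributeType: 'S' }],
      GlobalSecondaryIndexUpdates: [
        {
          Create: {
            IndexName: 'status-index',
            KeySchema: [{ AttributeName: 'status', KeyType: 'HASH' }],
            Projection: { ProjectionType: 'ALL' },
            // No ProvisionedThroughput needed with PAY_PER_REQUEST billing
          },
        },
      ],
    })
    .promise();
};

addIndex();
Queries on status then go through the index (IndexName: 'status-index'), and the attribute can be updated on existing items without any table replacement.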
Reference
Original article: https://zenn.dev/merutin/articles/1b00a44aa57c5f