Commit a5e7e04

aws-sdk-cpp-automation authored and amit-schreiber-firebolt committed
New feature: Updated the EC2 API to support faster launching for Windows images. Optimized images are pre-provisioned, using snapshots to launch instances up to 65% faster.
Documentation updates for Amazon Transcribe. This SDK release adds support for specifying a Bucket Owner for an S3 location.
This release adds FailureType to the response of DescribeAnomalyDetector.
This release adds support for a new Compute Optimizer capability that makes it easier for customers to optimize their EC2 instances by leveraging multiple CPU architectures.
1 parent f335c05 commit a5e7e04

File tree

98 files changed: +6724 additions, -645 deletions


aws-cpp-sdk-compute-optimizer/include/aws/compute-optimizer/model/AutoScalingGroupRecommendation.h

Lines changed: 141 additions & 0 deletions
@@ -14,6 +14,7 @@
 #include <aws/compute-optimizer/model/EffectiveRecommendationPreferences.h>
 #include <aws/compute-optimizer/model/UtilizationMetric.h>
 #include <aws/compute-optimizer/model/AutoScalingGroupRecommendationOption.h>
+#include <aws/compute-optimizer/model/InferredWorkloadType.h>
 #include <utility>

 namespace Aws
@@ -529,6 +530,143 @@ namespace Model
      */
     inline AutoScalingGroupRecommendation& WithEffectiveRecommendationPreferences(EffectiveRecommendationPreferences&& value) { SetEffectiveRecommendationPreferences(std::move(value)); return *this;}

+
+    /**
+     * <p>The applications that might be running on the instances in the Auto Scaling
+     * group as inferred by Compute Optimizer.</p> <p>Compute Optimizer can infer if
+     * one of the following applications might be running on the instances:</p> <ul>
+     * <li> <p> <code>AmazonEmr</code> - Infers that Amazon EMR might be running on the
+     * instances.</p> </li> <li> <p> <code>ApacheCassandra</code> - Infers that Apache
+     * Cassandra might be running on the instances.</p> </li> <li> <p>
+     * <code>ApacheHadoop</code> - Infers that Apache Hadoop might be running on the
+     * instances.</p> </li> <li> <p> <code>Memcached</code> - Infers that Memcached
+     * might be running on the instances.</p> </li> <li> <p> <code>NGINX</code> -
+     * Infers that NGINX might be running on the instances.</p> </li> <li> <p>
+     * <code>PostgreSql</code> - Infers that PostgreSQL might be running on the
+     * instances.</p> </li> <li> <p> <code>Redis</code> - Infers that Redis might be
+     * running on the instances.</p> </li> </ul>
+     */
+    inline const Aws::Vector<InferredWorkloadType>& GetInferredWorkloadTypes() const{ return m_inferredWorkloadTypes; }
+
+    inline bool InferredWorkloadTypesHasBeenSet() const { return m_inferredWorkloadTypesHasBeenSet; }
+
+    inline void SetInferredWorkloadTypes(const Aws::Vector<InferredWorkloadType>& value) { m_inferredWorkloadTypesHasBeenSet = true; m_inferredWorkloadTypes = value; }
+
+    inline void SetInferredWorkloadTypes(Aws::Vector<InferredWorkloadType>&& value) { m_inferredWorkloadTypesHasBeenSet = true; m_inferredWorkloadTypes = std::move(value); }
+
+    inline AutoScalingGroupRecommendation& WithInferredWorkloadTypes(const Aws::Vector<InferredWorkloadType>& value) { SetInferredWorkloadTypes(value); return *this;}
+
+    inline AutoScalingGroupRecommendation& WithInferredWorkloadTypes(Aws::Vector<InferredWorkloadType>&& value) { SetInferredWorkloadTypes(std::move(value)); return *this;}
+
+    inline AutoScalingGroupRecommendation& AddInferredWorkloadTypes(const InferredWorkloadType& value) { m_inferredWorkloadTypesHasBeenSet = true; m_inferredWorkloadTypes.push_back(value); return *this; }
+
+    inline AutoScalingGroupRecommendation& AddInferredWorkloadTypes(InferredWorkloadType&& value) { m_inferredWorkloadTypesHasBeenSet = true; m_inferredWorkloadTypes.push_back(std::move(value)); return *this; }
+
   private:

     Aws::String m_accountId;
@@ -563,6 +701,9 @@ namespace Model

     EffectiveRecommendationPreferences m_effectiveRecommendationPreferences;
     bool m_effectiveRecommendationPreferencesHasBeenSet;
+
+    Aws::Vector<InferredWorkloadType> m_inferredWorkloadTypes;
+    bool m_inferredWorkloadTypesHasBeenSet;
   };

 } // namespace Model
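The generated accessors above all follow the SDK's standard pattern for an optional vector-valued field: Get returns the value, HasBeenSet reports whether any mutator ran, Set replaces the vector, With does the same but returns *this for fluent chaining, and Add appends a single element. A minimal standalone sketch of that pattern, using hypothetical stand-in types (Recommendation, a local InferredWorkloadType enum) rather than the real SDK headers:

```cpp
#include <cassert>
#include <utility>
#include <vector>

// Hypothetical local stand-in for the SDK's InferredWorkloadType enum
// (the real one lives in aws/compute-optimizer/model/InferredWorkloadType.h).
enum class InferredWorkloadType { AmazonEmr, ApacheCassandra, ApacheHadoop,
                                  Memcached, Nginx, PostgreSql, Redis };

// A stripped-down model class mirroring the generated-accessor pattern of
// AutoScalingGroupRecommendation: every mutator flips a "has been set" flag
// so serializers can skip fields the caller never populated.
class Recommendation {
 public:
  const std::vector<InferredWorkloadType>& GetInferredWorkloadTypes() const { return m_inferredWorkloadTypes; }
  bool InferredWorkloadTypesHasBeenSet() const { return m_inferredWorkloadTypesHasBeenSet; }
  void SetInferredWorkloadTypes(std::vector<InferredWorkloadType> value) { m_inferredWorkloadTypesHasBeenSet = true; m_inferredWorkloadTypes = std::move(value); }
  // With* forwards to Set* and returns *this so calls can be chained fluently.
  Recommendation& WithInferredWorkloadTypes(std::vector<InferredWorkloadType> value) { SetInferredWorkloadTypes(std::move(value)); return *this; }
  // Add* appends one element instead of replacing the whole vector.
  Recommendation& AddInferredWorkloadTypes(InferredWorkloadType value) { m_inferredWorkloadTypesHasBeenSet = true; m_inferredWorkloadTypes.push_back(value); return *this; }

 private:
  std::vector<InferredWorkloadType> m_inferredWorkloadTypes;
  bool m_inferredWorkloadTypesHasBeenSet = false;
};
```

In the real SDK, a caller would check InferredWorkloadTypesHasBeenSet() on each recommendation returned by GetAutoScalingGroupRecommendations before reading the list, since the field may be absent.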

aws-cpp-sdk-compute-optimizer/include/aws/compute-optimizer/model/AutoScalingGroupRecommendationOption.h

Lines changed: 77 additions & 0 deletions
@@ -8,6 +8,7 @@
 #include <aws/compute-optimizer/model/AutoScalingGroupConfiguration.h>
 #include <aws/core/utils/memory/stl/AWSVector.h>
 #include <aws/compute-optimizer/model/SavingsOpportunity.h>
+#include <aws/compute-optimizer/model/MigrationEffort.h>
 #include <aws/compute-optimizer/model/UtilizationMetric.h>
 #include <utility>

@@ -305,6 +306,79 @@ namespace Model
      */
     inline AutoScalingGroupRecommendationOption& WithSavingsOpportunity(SavingsOpportunity&& value) { SetSavingsOpportunity(std::move(value)); return *this;}

+
+    /**
+     * <p>The level of effort required to migrate from the current instance type to the
+     * recommended instance type.</p> <p>For example, the migration effort is
+     * <code>Low</code> if Amazon EMR is the inferred workload type and an Amazon Web
+     * Services Graviton instance type is recommended. The migration effort is
+     * <code>Medium</code> if a workload type couldn't be inferred but an Amazon Web
+     * Services Graviton instance type is recommended. The migration effort is
+     * <code>VeryLow</code> if both the current and recommended instance types are of
+     * the same CPU architecture.</p>
+     */
+    inline const MigrationEffort& GetMigrationEffort() const{ return m_migrationEffort; }
+
+    inline bool MigrationEffortHasBeenSet() const { return m_migrationEffortHasBeenSet; }
+
+    inline void SetMigrationEffort(const MigrationEffort& value) { m_migrationEffortHasBeenSet = true; m_migrationEffort = value; }
+
+    inline void SetMigrationEffort(MigrationEffort&& value) { m_migrationEffortHasBeenSet = true; m_migrationEffort = std::move(value); }
+
+    inline AutoScalingGroupRecommendationOption& WithMigrationEffort(const MigrationEffort& value) { SetMigrationEffort(value); return *this;}
+
+    inline AutoScalingGroupRecommendationOption& WithMigrationEffort(MigrationEffort&& value) { SetMigrationEffort(std::move(value)); return *this;}
+
   private:

     AutoScalingGroupConfiguration m_configuration;
@@ -321,6 +395,9 @@ namespace Model

     SavingsOpportunity m_savingsOpportunity;
     bool m_savingsOpportunityHasBeenSet;
+
+    MigrationEffort m_migrationEffort;
+    bool m_migrationEffortHasBeenSet;
   };

 } // namespace Model
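The new MigrationEffort field lets callers rank recommendation options by how disruptive a switch would be. One way a consumer might use it, sketched with a hypothetical local mirror of the enum values named in the doc comment above (the real enum lives in aws/compute-optimizer/model/MigrationEffort.h; the helper function is an assumption, not SDK API):

```cpp
#include <cassert>

// Hypothetical stand-in for the SDK's MigrationEffort enum; the values below
// are the ones named in the documentation comment, plus the SDK's usual
// NOT_SET sentinel for fields that were never populated.
enum class MigrationEffort { NOT_SET, VeryLow, Low, Medium };

// Example policy a caller might apply: treat VeryLow/Low-effort options as
// safe to act on automatically and route everything else to manual review.
bool IsLowFrictionMigration(MigrationEffort effort) {
  switch (effort) {
    case MigrationEffort::VeryLow:  // same CPU architecture on both sides
    case MigrationEffort::Low:      // workload (e.g. Amazon EMR) known to run well on Graviton
      return true;
    default:                        // Medium, or effort not reported: needs a human decision
      return false;
  }
}
```

Against the real SDK, the caller would first check MigrationEffortHasBeenSet() on each AutoScalingGroupRecommendationOption and then pass GetMigrationEffort() into a policy function like this one.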
