AWS Exam Questions

Hi everyone, I'm not sure about the answers to the 14 questions below; I'd appreciate your help.

A legacy application is being migrated into AWS. The application has a large amount of data that is rarely accessed. When files are accessed they are retrieved sequentially. The application will be migrated onto an Amazon EC2 instance.

What is the LEAST expensive EBS volume type for this use case?

A-Throughput Optimized HDD (st1)
B-Provisioned IOPS SSD (io1)
C-Cold HDD (sc1)
D-General Purpose SSD (gp2)

------------------------------------------------------------

A company runs an internal browser-based application. The application runs on Amazon EC2 instances behind an Application Load Balancer. The instances run in an Amazon EC2 Auto Scaling group across multiple Availability Zones. The Auto Scaling group scales up to 20 instances during work hours, but scales down to 2 instances overnight. Staff are complaining that the application is very slow when the day begins, although it runs well by mid-morning.

How should the scaling be changed to address the staff complaints and keep costs to a minimum?

A-Implement a target tracking action triggered at a lower CPU threshold, and decrease the cooldown period
B-Implement a scheduled action that sets the minimum and maximum capacity to 20 shortly before the office opens
C-Implement a step scaling action triggered at a lower CPU threshold, and decrease the cooldown period
D-Implement a scheduled action that sets the desired capacity to 20 shortly before the office opens
-----------------------------------------------------------

A data processing application runs on an i3.large EC2 instance with a single 100 GB EBS gp2 volume. The application stores temporary data in a small database (less than 30 GB) located on the EBS root volume. The application is struggling to process the data fast enough, and a Solutions Architect has determined that the I/O speed of the temporary database is the bottleneck.

What is the MOST cost-efficient way to improve the database response times?

A-Enable EBS optimization on the instance and keep the temporary files on the existing volume
B-Move the temporary database onto instance storage
C-Put the temporary database on a new 50-GB EBS gp2 volume
D-Put the temporary database on a new 50-GB EBS io1 volume with a 3000 IOPS allocation

---------------------------------------------------------------

You are designing a solution on AWS that requires a file storage layer that can be shared between multiple EC2 instances. The storage should be highly-available and should scale easily.

Which AWS service can be used for this design?

A-Amazon EBS
B-Amazon EC2 instance store
C-Amazon S3
D-Amazon EFS

-------------------------------------------------------------------

An application is deployed on multiple AWS regions and accessed from around the world. The application exposes static public IP addresses. Some users are experiencing poor performance when accessing the application over the Internet.

What should a solutions architect recommend to reduce internet latency?

A-Set up an Amazon Route 53 geoproximity routing policy to route traffic
B-Set up an Amazon CloudFront distribution to access an application
C-Set up AWS Direct Connect locations in multiple Regions
D-Set up AWS Global Accelerator and add endpoints

----------------------------------------------------------------------

A new application will be launched on an Amazon EC2 instance with an Elastic Block Store (EBS) volume. A solutions architect needs to determine the most cost-effective storage option. The application will have infrequent usage, with peaks of traffic for a couple of hours in the morning and evening. Disk I/O is variable with peaks of up to 3,000 IOPS.

Which solution should the solutions architect recommend?

A-Amazon EBS Provisioned IOPS SSD (io1)
B-Amazon EBS Throughput Optimized HDD (st1)
C-Amazon EBS Cold HDD (sc1)
D-Amazon EBS General Purpose SSD (gp2)

-----------------------------------------------------------------------

A company is planning to use Amazon S3 to store documents uploaded by its customers. The documents must be encrypted at rest in Amazon S3. The company does not want to spend time managing and rotating the keys, but it does want to control who can access those keys.

What should a solutions architect use to accomplish this?

A-Server-Side Encryption with AWS KMS-Managed Keys (SSE-KMS)
B-Server-Side Encryption with Amazon S3-Managed Keys (SSE-S3)
C-Server-Side Encryption with keys stored in an S3 bucket
D-Server-Side Encryption with Customer-Provided Keys (SSE-C)

---------------------------------------------------------------------------

A web application in a three-tier architecture runs on a fleet of Amazon EC2 instances. Performance issues have been reported and investigations point to insufficient swap space. The operations team requires monitoring to determine if this is correct. What should a solutions architect recommend?

A-Configure an Amazon CloudWatch SwapUsage metric dimension. Monitor the SwapUsage dimension in the EC2 metrics in CloudWatch
B-Install an Amazon CloudWatch agent on the instances. Run an appropriate script on a set schedule. Monitor SwapUtilization metrics in CloudWatch
C-Use EC2 metadata to collect information, then publish it to Amazon CloudWatch custom metrics. Monitor SwapUsage metrics in CloudWatch
D-Enable detailed monitoring in the EC2 console. Create an Amazon CloudWatch SwapUtilization custom metric. Monitor SwapUtilization metrics in CloudWatch

-----------------------------------------------------------------------------

A company has deployed an API in a VPC behind an internal Network Load Balancer (NLB). An application that consumes the API as a client is deployed in a second account in private subnets.

Which architectural configurations will allow the API to be consumed without using the public Internet? (Select TWO.)

A-Configure an AWS Resource Access Manager connection between the two accounts. Access the API using the private address
B-Configure a PrivateLink connection for the API into the client VPC. Access the API using the PrivateLink address
C-Configure a VPC peering connection between the two VPCs. Access the API using the private address
D-Configure an AWS Direct Connect connection between the two VPCs. Access the API using the private address
E-Configure a ClassicLink connection for the API into the client VPC. Access the API using the ClassicLink address

-------------------------------------------------------------------------

A training provider hosts its website in an Amazon VPC, which consists of web servers behind an Application Load Balancer plus Amazon DynamoDB, which is not accessible from the internet.

What is the optimal architecture to ensure high availability and security?

A-Two private subnets for the web servers, and Two private subnets for DynamoDB in each Availability Zone, in addition to One shared public subnet for the elastic load balancer.
B-Two public subnets for the elastic load balancer, Two private subnets for the web servers, and Two private subnets for DynamoDB in each Availability Zone.
C-One public subnet for the elastic load balancer, One private subnet for the web servers, and One private subnet for DynamoDB in each Availability Zone.
D-One public subnet for the elastic load balancer, One public subnet for the web servers, and One private subnet for DynamoDB in each Availability Zone.

---------------------------------------------------------------------------

A transportation company is developing a multi-tier architecture to track the location of its cars during peak operating hours, to be used for analytics purposes. A solutions architect must determine the most viable multi-tier option to support this architecture. The data points must be accessible from the REST API.

Which action meets these requirements for storing and retrieving location data?

A-Use Amazon API Gateway with Amazon Kinesis Data Analytics.
B-Use Amazon API Gateway with AWS Lambda.
C-Use Amazon Athena with Amazon S3.
D-Use Amazon QuickSight with Amazon Redshift.

-----------------------------------------------------------------------------

You work as a solutions architect at a multinational bank and are designing a web application that will run on Amazon EC2 instances behind an Application Load Balancer (ALB). The security team requires that the application be resilient against malicious internet activity and attacks, and protected against new common vulnerabilities and exposures.

What should the solutions architect recommend?

A-Configure network ACLs and security groups to allow only ports 80 and 443 to access the EC2 instances.
B-Leverage Amazon CloudFront with the ALB endpoint as the origin.
C-Subscribe to AWS Shield Advanced and ensure common vulnerabilities and exposures are blocked.
D-Deploy an appropriate managed rule for AWS WAF and associate it with the ALB.

-------------------------------------------------------------------------------

A call center application consists of a three-tier application using Auto Scaling groups to automatically scale resources as needed. The Auto Scaling group scales up to 20 instances during work hours, but scales down to 5 instances overnight. Staff are complaining that the application is very slow when the day begins, although it runs well by mid-morning.

How should the scaling be changed to address the staff complaints and keep costs to a minimum?

A-Implement a step scaling action triggered at a lower CPU threshold, and decrease the cooldown period.
B-Implement a scheduled action that sets the minimum and maximum capacity to 20 shortly before the office opens.
C-Implement a target tracking action triggered at a lower CPU threshold, and decrease the cooldown period.
D-Implement a scheduled action that sets the desired capacity to 20 shortly before the office opens.

---------------------------------------------------------------------------

A customer owns a simple API in a VPC behind an internet-facing Application Load Balancer (ALB). A client application that consumes the API is deployed in a second account in private subnets behind a NAT gateway. When requests to the client application increase, the NAT gateway costs are higher than expected. A solutions architect has configured the ALB to be internal.

Which combination of architectural changes will reduce the NAT gateway costs? (Choose two.)

A-Configure a VPC peering connection between the two VPCs. Access the API using the private address.
B-Configure an AWS Resource Access Manager connection between the two accounts. Access the API using the private address.
C-Configure an AWS Direct Connect connection between the two VPCs. Access the API using the private address.
D-Configure a PrivateLink connection for the API into the client VPC. Access the API using the PrivateLink address.
E-Configure a ClassicLink connection for the API into the client VPC. Access the API using the ClassicLink address.

C-Cold HDD (sc1). The question asks about rarely accessed data and the least expensive option; sc1 provides that.
D-Implement a scheduled action that sets the desired capacity to 20 shortly before the office opens. There's no need to set min and max; desired is the target state. Have 20 machines running before people arrive at work; the group scales back down afterwards.
B-Move the temporary database onto instance storage. Instance store is fast and cheap, and it's what you use when you need I/O. If the machine stops, the data on it is lost, but the question says it's a temp database, so losing it doesn't matter.
D-Amazon EFS. It scales easily.
A-Set up an Amazon Route 53 geoproximity routing policy to route traffic. The question asks to reduce latency; you achieve that with this.
D-Amazon EBS General Purpose SSD (gp2). This is the cost-effective option.
A-Server-Side Encryption with AWS KMS-Managed Keys (SSE-KMS). It gives you the ability to see who is using the keys.
A-Configure an Amazon CloudWatch SwapUsage metric dimension. Monitor the SwapUsage dimension in the EC2 metrics in CloudWatch. No extra processing is needed.

B-Configure a PrivateLink connection for the API into the client VPC. Access the API using the PrivateLink address
C-Configure a VPC peering connection between the two VPCs. Access the API using the private address

C-One public subnet for the elastic load balancer, One private subnet for the web servers, and One private subnet for DynamoDB in each Availability Zone.

B-Use Amazon API Gateway with AWS Lambda.

D-Deploy an appropriate managed rule for AWS WAF and associate it with the ALB. WAF provides this.

D-Implement a scheduled action that sets the desired capacity to 20 shortly before the office opens.

A-Configure a VPC peering connection between the two VPCs. Access the API using the private address.
D-Configure a PrivateLink connection for the API into the client VPC. Access the API using the PrivateLink address.

Hocam, I couldn't quite understand why you chose the answers below:

Question 5:
A-Set up an Amazon Route 53 geoproximity routing policy to route traffic. The question asks to reduce latency; you achieve that with this.

Why wouldn't "Set up an Amazon CloudFront distribution to access the application" work here? Could the reason be this: CloudFront is used with applications that use the HTTP and HTTPS protocols, whereas the application in the question is one that works directly in a client-server fashion?

Question 6:

C-One public subnet for the elastic load balancer, One private subnet for the web servers, and One private subnet for DynamoDB in each Availability Zone.

You chose this because one subnet per tier is enough for this website, since that's what an optimal infrastructure requires. I think some people picked option B because for a moment they thought of a subnet as a single IP address. And on another note (apart from the "thinking of a subnet as a single IP address" point above), if the word "optimal" weren't in the question, option B with two public and two-plus-two private subnets could also have worked, although that would be burdensome both in cost and operationally. Did I understand correctly, hocam?

Question 13:
D-Implement a scheduled action that sets the desired capacity to 20 shortly before the office opens.

You chose this one here, but wouldn't cost go up in this case, since 20 instances will be running at once? Wouldn't it work if we instead picked "Implement a target tracking action triggered at a lower CPU threshold, and decrease the cooldown period"?

CloudFront only helps with your static assets; it doesn't make your main server reachable any faster.
You understood it correctly.
If you tie it to a CPU threshold, which is how it works anyway, new instances are added only once CPU climbs to that level, and adding a new instance takes time, so it stays slow. We want the application to run fast the moment people arrive at work, and we know this happens at the same time every day. So if I raise the count to 20 a little before that time, the systems are ready when people arrive and no slowness is experienced.
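For reference, a minimal sketch with boto3 of what such a scheduled action could look like; the group name `web-asg` and the cron expressions are made-up placeholders:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Hypothetical group and schedule: raise desired capacity to 20
# shortly before the office opens (07:45 UTC, weekdays).
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="web-asg",
    ScheduledActionName="warm-up-before-office-opens",
    Recurrence="45 7 * * MON-FRI",  # cron, evaluated in UTC
    DesiredCapacity=20,             # min/max are left untouched
)

# A second action scales back down in the evening.
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="web-asg",
    ScheduledActionName="scale-down-overnight",
    Recurrence="0 19 * * MON-FRI",
    DesiredCapacity=2,
)
```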


Hocam, what do you think about this question?
A Solutions Architect must design a web application that will be hosted on AWS, allowing users to purchase
access to premium, shared content that is stored in an S3 bucket. Upon payment, content will be available
for download for 14 days before the user is denied access.
Which of the following would be the LEAST complicated implementation?
A. Use an Amazon CloudFront distribution with an origin access identity (OAI). Configure the distribution
with an Amazon S3 origin to provide access to the file through signed URLs. Design a Lambda function
to remove data that is older than 14 days.
B. Use an S3 bucket and provide direct access to the file. Design the application to track purchases in a
DynamoDB table. Configure a Lambda function to remove data that is older than 14 days based on a
query to Amazon DynamoDB.
C. Use an Amazon CloudFront distribution with an OAI. Configure the distribution with an Amazon S3 origin
to provide access to the file through signed URLs. Design the application to set an expiration of 14 days
for the URL.
D. Use an Amazon CloudFront distribution with an OAI. Configure the distribution with an Amazon S3 origin
to provide access to the file through signed URLs. Design the application to set an expiration of 60
minutes for the URL and recreate the URL as necessary.

There's also this question, hocam. According to the dump the answers are D and E, but in my opinion they should be E and C. What do you think?
A solutions architect is designing a mission-critical web application. It will consist of Amazon EC2 instances
behind an Application Load Balancer and a relational database. The database should be highly available
and fault tolerant.
Which database implementations will meet these requirements? (Choose two.)
A. Amazon Redshift
B. Amazon DynamoDB
C. Amazon RDS for MySQL
D. MySQL-compatible Amazon Aurora Multi-AZ
E. Amazon RDS for SQL Server Standard Edition Multi-AZ

C. It asks for the least complicated option. Signed URLs are the technology that will be used, but as far as I know an expire date can't be set on the signed URL by itself. For that reason, handling this logic inside the application will be the least troublesome way.
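As background on where that expiry lives, here is a minimal sketch using botocore's CloudFrontSigner; the key pair ID, key file, and URL are made-up placeholders, and the application supplies the expiry (`date_less_than`) every time it generates a URL, which is exactly the logic the application has to own:

```python
import datetime

import rsa  # third-party 'rsa' package
from botocore.signers import CloudFrontSigner

KEY_PAIR_ID = "KXXXXXXXXXXXXX"  # hypothetical CloudFront public key ID

def rsa_signer(message: bytes) -> bytes:
    # Sign with the private key matching the CloudFront public key.
    with open("cloudfront_private_key.pem", "rb") as key_file:
        private_key = rsa.PrivateKey.load_pkcs1(key_file.read())
    return rsa.sign(message, private_key, "SHA-1")

signer = CloudFrontSigner(KEY_PAIR_ID, rsa_signer)

# The application chooses the validity window each time it creates a URL
# and can regenerate URLs as needed while the purchase is still valid.
expires = datetime.datetime.utcnow() + datetime.timedelta(hours=1)
url = signer.generate_presigned_url(
    "https://dxxxxxxxxxxxx.cloudfront.net/premium/report.pdf",
    date_less_than=expires,
)
print(url)
```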

high-availability + fault tolerance = multi-az.
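A minimal sketch of what enabling that looks like with boto3 for RDS (identifier, instance class, and credentials below are made up, and the engine is arbitrary); `MultiAZ=True` is what provisions the synchronous standby in another Availability Zone:

```python
import boto3

rds = boto3.client("rds")

# Hypothetical Multi-AZ RDS instance; a standby replica is kept in a
# second Availability Zone and failover happens automatically.
rds.create_db_instance(
    DBInstanceIdentifier="app-db",
    Engine="mysql",
    DBInstanceClass="db.m5.large",
    AllocatedStorage=100,
    MasterUsername="admin",
    MasterUserPassword="change-me-please",
    MultiAZ=True,
)
```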

Hello again, hocam. First of all, thank you very much. Sometimes the problem is with how to approach a question. I'm aiming to take the exam this week, so I'll post questions like this from time to time. If you answer them, great; if you don't have the time, that's fine too.

For this question everyone said Lambda@Edge. What the question asks for is high availability, and it already says CloudFront is being used for acceleration, so I'm not considering B at all. I'm confused too: can Lambda be used to make static content highly available?
C seems closer to me, but I think a load balancer would be needed there as well. What is your opinion?

A company recently launched its website to serve content to its global user base. The company wants to
store and accelerate the delivery of static content to its users by leveraging Amazon CloudFront with an
Amazon EC2 instance attached as its origin.
How should a solutions architect optimize high availability for the application?
A. Use Lambda@Edge for CloudFront.
B. Use Amazon S3 Transfer Acceleration for CloudFront.
C. Configure another EC2 instance in a different Availability Zone as part of the origin group.
D. Configure another EC2 instance as part of the origin server cluster in the same Availability Zone.

Another strange question. In my opinion the answer is C: when one of the AZs goes down you still have 4 active instances. The question says "at least 4"; it doesn't say "minimum 4". Still, it's confusing. The dump gives B as the correct answer, and in the comments people are torn between A and C. What do you think?
A company relies on an application that needs at least 4 Amazon EC2 instances during regular traffic and must scale up to 12 EC2 instances during peak loads.
The application is critical to the business and must be highly available.
Which solution will meet these requirements?

  • A. Deploy the EC2 instances in an Auto Scaling group. Set the minimum to 4 and the maximum to 12, with 2 in Availability Zone A and 2 in Availability Zone B.
  • B. Deploy the EC2 instances in an Auto Scaling group. Set the minimum to 4 and the maximum to 12, with all 4 in Availability Zone A.
  • C. Deploy the EC2 instances in an Auto Scaling group. Set the minimum to 8 and the maximum to 12, with 4 in Availability Zone A and 4 in Availability Zone B.
  • D. Deploy the EC2 instances in an Auto Scaling group. Set the minimum to 8 and the maximum to 12, with all 8 in Availability Zone A.

Yet another confusing question. The dump says A and B, but option E also seems logical. What do you think?
A solutions architect is designing a customer-facing application. The application is expected to have a
variable amount of reads and writes depending on the time of year and clearly defined access patterns
throughout the year. Management requires that database auditing and scaling be managed in the AWS
Cloud. The Recovery Point Objective (RPO) must be less than 5 hours.
Which solutions can accomplish this? (Choose two.)
A. Use Amazon DynamoDB with auto scaling. Use on-demand backups and AWS CloudTrail.
B. Use Amazon DynamoDB with auto scaling. Use on-demand backups and Amazon DynamoDB Streams.
C. Use Amazon Redshift. Configure concurrency scaling. Enable audit logging. Perform database
snapshots every 4 hours.
D. Use Amazon RDS with Provisioned IOPS. Enable the database auditing parameter. Perform database
snapshots every 5 hours.
E. Use Amazon RDS with auto scaling. Enable the database auditing parameter. Configure the backup
retention period to at least 1 day.

The answer is C. Lambda@Edge won't do; there's no need for it for static content, and besides it has nothing to do with high availability. For HA, a second EC2 instance in a different AZ is the right choice.
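To show what that maps to on the CloudFront side, here is the rough shape of an origin group inside a distribution config (the origin IDs are made up and would have to match existing entries under Origins); CloudFront fails over from the primary EC2 origin to the one in the other AZ on the listed status codes:

```python
# Fragment of a CloudFront DistributionConfig (as used with boto3
# update_distribution); origin IDs below are hypothetical.
origin_groups = {
    "Quantity": 1,
    "Items": [
        {
            "Id": "ec2-ha-group",
            "FailoverCriteria": {
                "StatusCodes": {"Quantity": 3, "Items": [500, 502, 503]}
            },
            "Members": {
                "Quantity": 2,
                "Items": [
                    {"OriginId": "ec2-origin-az-a"},  # primary origin
                    {"OriginId": "ec2-origin-az-b"},  # failover origin, other AZ
                ],
            },
        }
    ],
}
```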

The answer is C; you've approached it the right way.

A and B are correct, because of the read/write scalability point.

Another oddity. Almost everyone answered A and C for this question, but we generally use the NAT gateway for private subnets. My answer is C & D. What is your opinion?
A company is planning to migrate its virtual server-based workloads to AWS. The company has internet-facing
load balancers backed by application servers. The application servers rely on patches from an
internet-hosted repository.
Which services should a solutions architect recommend be hosted on the public subnet? (Choose two.)
A. NAT gateway
B. Amazon RDS DB instances
C. Application Load Balancers
D. Amazon EC2 application servers
E. Amazon Elastic File System (Amazon EFS) volumes

A and C are correct. It's clear that the application servers don't sit in an internet-facing subnet; they are in private subnets behind the load balancer. In that case, for them to reach the internet, they either have to go through the Application Load Balancer and reach whatever they need over it, or they have to use a NAT gateway.

According to the dump the answer is A. People commenting said D, and some of them say B :).
A solutions architect is helping a developer design a new ecommerce shopping cart application using AWS
services. The developer is unsure of the current database schema and expects to make changes as the
ecommerce site grows. The solution needs to be highly resilient and capable of automatically scaling read
and write capacity.
Which database solution meets these requirements?
A. Amazon Aurora PostgreSQL
B. Amazon DynamoDB with on-demand enabled
C. Amazon DynamoDB with DynamoDB Streams enabled
D. Amazon SQS and Amazon Aurora PostgreSQL

The current DB schema isn't known and may change, so a relational database won't do. DynamoDB should be used, and on-demand should be enabled for auto scaling.
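A minimal sketch of that choice (table name and key are made up): creating the table with `BillingMode='PAY_PER_REQUEST'` is what "on-demand enabled" means, so read and write capacity scale without provisioned throughput to manage:

```python
import boto3

dynamodb = boto3.client("dynamodb")

# Hypothetical shopping-cart table; on-demand (PAY_PER_REQUEST) billing
# removes the need to define or auto-scale provisioned capacity.
dynamodb.create_table(
    TableName="shopping-cart",
    AttributeDefinitions=[{"AttributeName": "cart_id", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "cart_id", "KeyType": "HASH"}],
    BillingMode="PAY_PER_REQUEST",
)
```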

According to the dump the answer is D. In the comments people mostly said B, but it looks like C to me. First of all, I'm not considering the Kinesis options at all, because we're not doing analytics or storing media. From the examples you gave in the courses, I remember this kind of setup can be built with SNS and SQS. B and C are similar, but since SNS's job is push and SQS's job is pull, C seems like the closer option. Your opinion?

A company is designing a web application using AWS that processes insurance quotes. Users will request quotes from the application. Quotes must be separated by quote type, must be responded to within 24 hours, and must not be lost. The solution should be simple to set up and maintain.
Which solution meets these requirements?

  • A. Create multiple Amazon Kinesis data streams based on the quote type. Configure the web application to send messages to the proper data stream. Configure each backend group of application servers to poll messages from its own data stream using the Kinesis Client Library (KCL).
  • B. Create multiple Amazon Simple Notification Service (Amazon SNS) topics and register Amazon SQS queues to their own SNS topic based on the quote type. Configure the web application to publish messages to the SNS topic queue. Configure each backend application server to work its own SQS queue.
  • C. Create a single Amazon Simple Notification Service (Amazon SNS) topic and subscribe the Amazon SQS queues to the SNS topic. Configure SNS message filtering to publish messages to the proper SQS queue based on the quote type. Configure each backend application server to work its own SQS queue.
  • D. Create multiple Amazon Kinesis Data Firehose delivery streams based on the quote type to deliver data streams to an Amazon Elasticsearch Service (Amazon ES) cluster. Configure the web application to send messages to the proper delivery stream. Configure each backend group of application servers to search for the messages from Amazon ES and process them accordingly.
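For what option C describes mechanically, a minimal sketch (the ARNs and the `quote_type` attribute are made up) of subscribing one SQS queue per quote type to a single SNS topic with a filter policy, so each queue only receives its own quote type:

```python
import json

import boto3

sns = boto3.client("sns")

# Hypothetical ARNs: one topic, one queue per quote type.
TOPIC_ARN = "arn:aws:sns:eu-west-1:123456789012:insurance-quotes"
AUTO_QUEUE_ARN = "arn:aws:sqs:eu-west-1:123456789012:auto-quotes"

# Only messages whose 'quote_type' attribute equals 'auto' reach this queue.
sns.subscribe(
    TopicArn=TOPIC_ARN,
    Protocol="sqs",
    Endpoint=AUTO_QUEUE_ARN,
    Attributes={"FilterPolicy": json.dumps({"quote_type": ["auto"]})},
)

# The web application publishes once, tagging the quote type as an attribute.
sns.publish(
    TopicArn=TOPIC_ARN,
    Message=json.dumps({"customer_id": "123", "details": "..."}),
    MessageAttributes={
        "quote_type": {"DataType": "String", "StringValue": "auto"}
    },
)
```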

Your thoughts?
A solutions architect is designing the cloud architecture for a new application being deployed to AWS. The
application allows users to interactively download and upload files. Files older than 2 years will be accessed
less frequently. The solutions architect needs to ensure that the application can scale to any number of files
while maintaining high availability and durability.
Which scalable solutions should the solutions architect recommend? (Choose two.)
A. Store the files on Amazon S3 with a lifecycle policy that moves objects older than 2 years to S3 Glacier.
B. Store the files on Amazon S3 with a lifecycle policy that moves objects older than 2 years to S3
Standard-Infrequent Access (S3 Standard-IA)
C. Store the files on Amazon Elastic File System (Amazon EFS) with a lifecycle policy that moves objects
older than 2 years to EFS Infrequent Access (EFS IA).
D. Store the files in Amazon Elastic Block Store (Amazon EBS) volumes. Schedule snapshots of the
volumes. Use the snapshots to archive data older than 2 years.
E. Store the files in RAID-striped Amazon Elastic Block Store (Amazon EBS) volumes. Schedule
snapshots of the volumes. Use the snapshots to archive data older than 2 years.
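On the S3 side that options A and B describe, a minimal sketch of such a lifecycle rule (the bucket name is made up); swapping `STANDARD_IA` for `GLACIER` is the difference between the two options:

```python
import boto3

s3 = boto3.client("s3")

# Hypothetical bucket; after roughly 2 years (730 days) objects transition
# to a cheaper storage class (STANDARD_IA here, GLACIER for option A).
s3.put_bucket_lifecycle_configuration(
    Bucket="user-files-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-after-2-years",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},
                "Transitions": [{"Days": 730, "StorageClass": "STANDARD_IA"}],
            }
        ]
    },
)
```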