You can access the official Microsoft page for the AZ-303 exam here
Preparing for the AZ-303 Microsoft Azure Architect Technologies exam? Not sure where to start? This post is the AZ-303 Study Guide, which helps you achieve the Microsoft Certified: Azure Solutions Architect Expert certification.
Note: You also need to pass AZ-304 to earn this certification.
This post contains a curated list of articles from Microsoft documentation for each objective of the AZ-303 exam. Please share it with your friends and colleagues to help them prepare for the exam as well.
Create an Azure Storage table in the Azure portal
Create a table dynamically with the .NET SDK (Table API)
Notes:
Don’t confuse the Table API in Cosmos DB with Azure Table storage. They share the same data model and expose similar query operations through their SDKs.
However, the Table API in Cosmos DB adds premium capabilities such as global distribution, dedicated throughput, and high availability. So, given the chance, you should consider migrating your existing Table storage app to the Table API.
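As a quick illustration, here is a minimal Python sketch (assuming the azure-data-tables package and a placeholder connection string) showing that the same Tables SDK works against both Azure Table storage and the Cosmos DB Table API; only the connection string changes.

```python
from azure.data.tables import TableServiceClient

# Placeholder connection string: point it at either an Azure Storage account
# or a Cosmos DB Table API account - the code below stays the same.
conn_str = "<your-connection-string>"

service = TableServiceClient.from_connection_string(conn_str)
table = service.create_table_if_not_exists("Products")

# Entities need a PartitionKey and a RowKey, just like classic Table storage.
table.create_entity({
    "PartitionKey": "electronics",
    "RowKey": "1001",
    "Name": "Keyboard",
    "Price": 24.99,
})
```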
Review the Learning Path: Choose the appropriate API for Azure Cosmos DB
Notes:
Cosmos DB is a multi-model database service. This means you can build any of the common NoSQL database models with the following APIs:
1. Gremlin (Graph) API – To describe the relationships between entities.
2. Azure Table API – Use it only to migrate applications that already use Azure Table Storage to Cosmos DB; otherwise, avoid it.
3. MongoDB API – If your project already uses MongoDB, use this API. Migration is as simple as updating the connection string.
4. Cassandra API – If your team already uses Cassandra or is skilled in the Cassandra Query Language (CQL), use this API.
5. Core (SQL) API – For all other cases and for new projects, use the SQL API. It is superior in functionality to the other APIs. When in doubt, use Core (SQL); see the sketch after this list.
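For the Core (SQL) API, a minimal Python sketch (assuming the azure-cosmos package; the endpoint, key, and names below are placeholders) looks like this:

```python
from azure.cosmos import CosmosClient, PartitionKey

# Placeholder endpoint and key for illustration.
client = CosmosClient("https://<account>.documents.azure.com:443/", credential="<key>")

database = client.create_database_if_not_exists("AppDb")
container = database.create_container_if_not_exists(
    id="Orders",
    partition_key=PartitionKey(path="/customerId"),
)

# Items are plain JSON documents; 'id' and the partition key are required.
container.upsert_item({"id": "order-1", "customerId": "c-42", "total": 19.99})

# SQL-like queries run against the JSON documents.
for item in container.query_items(
    query="SELECT * FROM c WHERE c.customerId = 'c-42'",
    enable_cross_partition_query=True,
):
    print(item["id"], item["total"])
```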
A Pluralsight module on understanding global distribution & replication
Add/remove regions from your Cosmos DB account
Configure Multiple write-regions
Configure Multi-master in your app (To write to the nearest write location)
Notes:
a. Why is data replication important in Azure Cosmos DB?
1. To reduce the latency of your application. If you have a global audience, users farther from the database may experience high latency (the time between request and response). By enabling Cosmos DB replication, you direct each request to the nearest data center; the SDK takes care of that.
2. Replication enables Business Continuity. If there is a natural disaster in a data center, you know the data is safe elsewhere.
b. In addition to read replication, you can set up multi-region writes. Why? For the same reason: to reduce write latency. However, this may cause conflicts when the same data is updated in different regions.
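To illustrate point a.1, here is a minimal Python sketch (assuming the azure-cosmos package; the account details are placeholders) of how the SDK is told which replicated regions it should prefer, so requests are served from the nearest data center. Adding regions and enabling multi-region writes on the account itself is done as described in the articles above.

```python
from azure.cosmos import CosmosClient

# Placeholder endpoint and key. The account is assumed to be replicated to the
# regions listed below; the SDK routes requests to the first available one.
client = CosmosClient(
    "https://<account>.documents.azure.com:443/",
    credential="<key>",
    preferred_locations=["West Europe", "East US"],
)

container = client.get_database_client("AppDb").get_container_client("Orders")

# This read is served from the nearest preferred region that is available.
item = container.read_item(item="order-1", partition_key="c-42")
print(item["total"])
```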
Configure Azure SQL database settings
Configure Server-level IP firewall rules
Configure security features of Azure SQL Database like:
a. Advanced data security (Detects security threats like SQL injection)
b. Auditing (Tracks & logs database events to gain insights into discrepancies)
c. Dynamic data masking (Hides sensitive data in your DB)
d. Transparent Data Encryption [TDE] (Encryption at rest)
Notes:
You need outbound port 1433 open to connect to an Azure SQL database from your machine (with a client tool like SSMS).
You can create server-level firewall rules in the Azure portal or with T-SQL (e.g., from SSMS). Database-level firewall rules can be configured only with T-SQL statements.
Server-level firewall rules apply to all the databases on the server and are stored in the master database. Database-level firewall rules are stored in the individual database, which makes them easily portable.
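As a rough sketch of creating a server-level rule programmatically, the azure-mgmt-sql management SDK can be used from Python (the subscription, resource group, server, and IP values below are placeholders, and the exact parameter shape can vary between SDK versions):

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.sql import SqlManagementClient

# Placeholder subscription ID; authentication uses your default Azure credential.
sql_client = SqlManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Server-level rule: applies to every database on the logical server.
rule = sql_client.firewall_rules.create_or_update(
    resource_group_name="rg-demo",
    server_name="sql-demo-server",
    firewall_rule_name="AllowMyWorkstation",
    parameters={"start_ip_address": "203.0.113.10", "end_ip_address": "203.0.113.10"},
)
print(rule.name, rule.start_ip_address, rule.end_ip_address)
```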
Getting started with Azure SQL Managed Instance
Creating an Azure SQL Database Managed Instance
Notes:
Azure SQL Managed Instance is best used for migrating existing on-premises applications with minimal effort (lift-and-shift). It provides the latest stable database engine version.
Azure SQL Managed Instance = Best of Azure SQL Database + Best of SQL Server on Azure VM
High-availability for Azure SQL Database
Notes:
What does High Availability ensure for Azure SQL Database?
That data is immune to failures.
That SQL Server and Windows maintenance operations do not impact the workload.
High-availability models available:
Standard: The Basic, Standard & General Purpose tiers use this model. It has two layers: a stateless compute layer and a stateful data layer (the .mdf & .ldf files) stored in Azure premium storage, which has built-in high availability. In case of failure, Azure Service Fabric spins up another stateless compute node. This model is not ideal for heavy workloads, because the new compute node starts with a cold cache (no locally cached data).
Premium (used by the Premium & Business Critical service tiers): Unlike the previous model, both the compute and the storage live on the same node. This node is replicated three to four times (the others act as secondary replicas) to provide high availability, implemented with Always On availability groups.
Additional benefits of Premium availability model:
Read Scale-Out: You can redirect read-only operations to the secondary replicas (see the connection-string sketch after this list)
Availability Zones: You can place the databases in availability zones so the data is replicated across data centers in a region. Although the data is immune to data center-specific failures, you may observe network latency (due to distance between data centers) as transactions are committed across availability zones.
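For Read Scale-Out, the redirection is driven by the ApplicationIntent=ReadOnly setting in the connection string. Here is a minimal Python sketch using pyodbc (the server, database, and credentials are placeholders, and it assumes the ODBC Driver 17 for SQL Server is installed):

```python
import pyodbc

# ApplicationIntent=ReadOnly routes this connection to a readable secondary
# replica (Premium / Business Critical tiers with Read Scale-Out enabled).
conn = pyodbc.connect(
    "Driver={ODBC Driver 17 for SQL Server};"
    "Server=tcp:<server>.database.windows.net,1433;"
    "Database=<database>;"
    "Uid=<user>;Pwd=<password>;"
    "Encrypt=yes;TrustServerCertificate=no;"
    "ApplicationIntent=ReadOnly;"
)

cursor = conn.cursor()
# Read-only workloads (reports, dashboards) run here without touching the primary.
cursor.execute("SELECT DATABASEPROPERTYEX(DB_NAME(), 'Updateability')")
print(cursor.fetchone()[0])  # Expected: READ_ONLY when served by a secondary replica
```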
This brings us to the end of the AZ-303 Microsoft Azure Architect Technologies Study Guide.