AWS GenAI Augmented Well-Architected Review

1. Well-Architected Review Overview

The AWS Well-Architected Review (WAR) is a structured, consistent framework designed to evaluate AWS workloads. Its core goal is to help cloud architects design and operate reliable, secure, efficient, cost-effective, and sustainable systems. The review is built upon six foundational pillars:

  • Operational Excellence: Focuses on managing and automating operations, monitoring systems, and improving processes through iteration.
  • Security: Prioritizes protecting data, systems, and assets through strong identity and access controls, encryption, and monitoring.
  • Reliability: Addresses recovery planning, fault isolation, system availability, and automatic recovery mechanisms.
  • Performance Efficiency: Encourages optimal use of resources and the ability to adapt to changing requirements.
  • Cost Optimization: Involves controlling where money is spent, avoiding unnecessary costs, and increasing awareness of usage patterns.
  • Sustainability: Promotes responsible resource consumption and environmental impact reduction through architectural and operational choices.

Traditionally, WAR is a manual, resource-intensive process involving extensive stakeholder interviews, documentation reviews, and subjective interpretation of best practices. This process often leads to inconsistencies and elongated timelines.

2. Introducing GenAI Augmented WAR

To modernize and streamline the Well-Architected Review process, this solution uses the Generative AI (GenAI) capabilities of Anthropic Claude foundation models on Amazon Bedrock, transforming manual assessments into automated, intelligent evaluations.

3. Solution Overview


[Figure: Gen AI Augmented WAR Solution]

[Demo: Gen AI Augmented WAR Application Demo]

3.1 Streamlit-Based User Interface for GenAI Application

The Streamlit app provides the user interface: it lets users upload IaC templates, run the analysis, view the Well-Architected Review findings, and generate an AWS Well-Architected Review report.

  • Users are prompted to upload their infrastructure-as-code files (Terraform or CloudFormation templates). These files represent the actual cloud workloads that will be assessed for architectural review.
  • Once the user uploads their Terraform or CloudFormation file, the application invokes the function upload_file_to_s3() to validate and store it securely in an S3 bucket.
  • After a file is uploaded, users can initiate the "AWS WAR Analysis", which triggers a GenAI-driven evaluation via Amazon Bedrock. The app processes the uploaded template, checks for alignment with AWS best practices, and displays detailed results.

Once the analysis is complete, the user can choose to "Complete the WA Review and Generate Report", which performs three actions:

  1. Updates the Well-Architected Review in the AWS environment with the findings.
  2. Generates a risk summary across the AWS Well-Architected pillars.
  3. Creates a downloadable PDF report summarizing the Well-Architected Review findings.
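The code excerpts in this article reference several module-level names (AWS clients, the S3 bucket, and workload identifiers) that are defined outside the snippets. A minimal setup sketch might look like the following; every concrete value here is a placeholder, not taken from the original code:

```python
# Hypothetical module-level setup assumed by the snippets in this article.
# All concrete values (bucket name, workload ID) are placeholders -- substitute your own.
import boto3

s3_bucket = "my-war-review-bucket"                    # placeholder bucket name
workload_id = "abcd1234abcd1234abcd1234abcd1234"      # placeholder WA Tool workload ID
lens_alias = "wellarchitected"                        # the default Well-Architected lens
BEDROCK_MODEL_ID = "anthropic.claude-3-sonnet-20240229-v1:0"

# Extensions accepted by the file uploader, keyed by IaC type.
SUPPORTED_FILE_TYPES = {
    "terraform": [".tf"],
    "cloudformation": [".yaml", ".yml", ".json", ".template"],
}

s3_client = boto3.client("s3")
wa_client = boto3.client("wellarchitected")
bedrock_client = boto3.client("bedrock-runtime")
```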

def upload_file_to_s3(uploaded_file, s3_bucket):
    try:
        file_type = get_file_type(uploaded_file.name)
        if not file_type:
            st.error("Unsupported file type. Please upload a valid Terraform or CloudFormation file.")
            return None

        if file_type == "terraform" and not validate_terraform_file(uploaded_file):
            st.error("Invalid Terraform configuration file.")
            return None

        # Generate a unique path in S3
        timestamp = datetime.now().strftime('%Y%m%d_%H%M%S')
        s3_key = f"uploads/{timestamp}/{uploaded_file.name}"
        
        s3_client.upload_fileobj(uploaded_file, s3_bucket, s3_key)
        file_url = f"https://{s3_bucket}.s3.{s3_client.meta.region_name}.amazonaws.com/{s3_key}"
        
        st.success("Your workload was received successfully!")
        return {
            'url': file_url,
            'type': file_type
        }
    except ClientError as e:
        st.error(f"Error uploading file to S3: {e}")
        return None
    finally:
        if uploaded_file:
            uploaded_file.close()

def main():
    st.markdown("""
        <style>
        .main-title {
            font-size: 2.5rem;
            color: #232f3e;
            padding: 1rem 0;
            text-align: center;
            margin-bottom: 2rem;
            font-weight: bold;
        }
        </style>
    """, unsafe_allow_html=True)

    # Always display the title
    st.markdown('<h1 class="main-title">GenAI Augmented AWS Well-Architected Review 🏛️</h1>', unsafe_allow_html=True)
    if 'initialized' not in st.session_state:
        initialize_session_state()
        st.session_state.initialized = True
   
    best_practices_file_path = 'well_architected_best_practices.json'
    best_practices_csv_path = 'well_architected_best_practices.csv'
    
    try:
        s3_client.head_object(Bucket=s3_bucket, Key=best_practices_file_path)
        s3_client.head_object(Bucket=s3_bucket, Key=best_practices_csv_path)
    except ClientError:
        st.error("Required files not found in S3 bucket")
        return
    
    # File upload section
    uploaded_file = st.file_uploader(
        "Upload Workload Infrastructure as Code (Terraform or CloudFormation)",
        type=[ext[1:] for exts in SUPPORTED_FILE_TYPES.values() for ext in exts]
    )
    
    if uploaded_file is not None:
        file_info = upload_file_to_s3(uploaded_file, s3_bucket)
        
        if file_info:
            col1, col2 = st.columns(2)
            
            with col1:
                st.markdown("""
    <style>
    .stButton > button {
        background-color: #232f3e;  /* AWS navy blue */
        color: white;
        border: none;
        border-radius: 4px;
        padding: 0.5rem 1rem;
        font-weight: bold;
    }
    .stButton > button:hover {
        background-color: #1a2530;
        color: white;
    }
    .stButton > button:active {
        background-color: #131b24;
        color: white;
    }
    .stButton > button:disabled {
        background-color: #cccccc;
        color: #666666;
    }
    </style>
""", unsafe_allow_html=True)
                analyze_button = st.button(
                    "Perform AWS WAR Analysis",
                    key='analyze_button',
                    on_click=analyze_callback,
                    disabled=st.session_state.analyze_disabled
                )
            
            with col2:
                st.markdown("""
    <style>
    .stButton > button {
        background-color: #232f3e;  /* AWS navy blue */
        color: white;
        border: none;
        border-radius: 4px;
        padding: 0.5rem 1rem;
        font-weight: bold;
    }
    .stButton > button:hover {
        background-color: #1a2530;
        color: white;
    }
    .stButton > button:active {
        background-color: #131b24;
        color: white;
    }
    .stButton > button:disabled {
        background-color: #cccccc;
        color: #666666;
    }
    </style>
""", unsafe_allow_html=True)
                complete_review_button = st.button(
                    "Complete WA Review & Generate Report",
                    key='complete_review_button',
                    disabled=st.session_state.update_disabled
                )
            
            if file_info and analyze_button:
                if st.session_state.analyze_click == 1:
                    with st.spinner('Checking your workloads for AWS best practices...'):
                        analysis_results = analyze_template_with_bedrock(file_info, best_practices_file_path)
                        st.session_state.analyze_click += 1
                        st.session_state.analysis_result = analysis_results
                        
                        if st.session_state.analysis_result:
                            display_result(st.session_state.analysis_result, best_practices_csv_path)
                        else:
                            st.error("Failed to analyze the template. Please try again.")
                            st.session_state.update_disabled = True
                else:
                    display_result(st.session_state.analysis_result, best_practices_csv_path)
            
            # Combined WA Review and Report Generation
            if complete_review_button and st.session_state.analysis_result:
                if st.session_state.update_click == 1:
                    # Step 1: Update WA Review
                    with st.spinner('Updating Well-Architected Review...'):
                        status = update_workload(st.session_state.analysis_result, best_practices_csv_path)
                        if status == "Success":
                            st.success("Well-Architected Review updated successfully!")
                            st.session_state.update_click += 1

                            # Step 2: Display Risk Summary
                            with st.spinner('Generating Risk Summary...'):
                                pillar_summaries, total_questions, answered_questions = summarize_risks(workload_id, lens_alias)
                                display_risk_summary(pillar_summaries, total_questions, answered_questions)

                            # Step 3: Generate and Display Report
                            with st.spinner("Generating Well-Architected Report..."):
                                try:
                                    response = wa_client.get_lens_review_report(
                                        WorkloadId=workload_id,
                                        LensAlias=lens_alias
                                    )
                                    base64_string = response.get('LensReviewReport', {}).get('Base64String')
                                    if base64_string:
                                        b64 = base64.b64encode(base64.b64decode(base64_string)).decode()
                                        href = f'<a href="data:application/pdf;base64,{b64}" download="WA_Review_Report_{workload_id}.pdf">Click here to download the Well-Architected Review Report</a>'
                                        st.session_state.report_link = href
                                        st.markdown(href, unsafe_allow_html=True)
                                        st.success("Review completed! Click the link above to download your report.")
                                    else:
                                        st.error("Failed to generate the report.")
                                except Exception as e:
                                    st.error(f"Error generating report: {str(e)}")
                        else:
                            st.error(f"Error updating workload: {status}")
                            st.session_state.update_disabled = False
                else:
                    # Display saved results for subsequent clicks
                    pillar_summaries, total_questions, answered_questions = summarize_risks(workload_id, lens_alias)
                    display_risk_summary(pillar_summaries, total_questions, answered_questions)
                    st.markdown(st.session_state.report_link, unsafe_allow_html=True)
                    st.success("Review completed! Click the link above to download your report.")

def initialize_session_state():
    """Initialize all session state variables"""
    try:
        if 'initialized' in st.session_state:
            return
        session_vars = {
            'analysis_result': None,
            'analyze_disabled': False,
            'analyze_click': 1,
            'update_click': 1,
            'report_link': None,
            'update_disabled': True
        }
    
        for var, value in session_vars.items():
            if var not in st.session_state:
                st.session_state[var] = value
    except Exception as e:
        st.error(f"Error initializing session state: {str(e)}")
        st.stop()

if __name__ == "__main__":
    main()        

3.2 Well-Architected Review analysis with Amazon Bedrock

At the heart of the solution lies Amazon Bedrock, powered by the Claude 3 Sonnet foundation model. This intelligent backend analyzes IaC (Infrastructure as Code) and aligns it with AWS Well-Architected best practices.

After uploading, the function analyze_template_with_bedrock() takes over to perform a GenAI-based architectural assessment:

  • Best Practices Load: fetches a curated JSON of AWS Well-Architected best practices from S3.
  • Dynamic Prompt Generation: depending on whether the uploaded file is Terraform or CloudFormation, calls create_terraform_prompt() or create_cloudformation_prompt(). These functions build an instructional prompt that includes the file URL, the full best practices JSON, and strict output-format instructions.
  • Model Invocation via Amazon Bedrock: sends the structured prompt to the Anthropic Claude model to evaluate how well the uploaded IaC adheres to the AWS best practices.

def analyze_template_with_bedrock(file_info, best_practices_json_path):
    model_id = BEDROCK_MODEL_ID
    
    try:
        # Load the best practices JSON
        response = s3_client.get_object(Bucket=s3_bucket, Key=best_practices_json_path)
        content = response['Body'].read().decode('utf-8')
        best_practices = json.loads(content)
        
        # Create appropriate prompt based on file type
        if file_info['type'] == "terraform":
            prompt = create_terraform_prompt(file_info['url'], best_practices)
        else:
            prompt = create_cloudformation_prompt(file_info['url'], best_practices)

        request_body = {
            "anthropic_version": "bedrock-2023-05-31",
            "max_tokens": 4096,
            "messages": [
                {
                    "role": "user",
                    "content": [
                        {
                            "type": "text",
                            "text": prompt
                        }
                    ]
                }
            ]
        }

        response = bedrock_client.invoke_model(
            modelId=model_id,
            contentType='application/json',
            accept='application/json',
            body=json.dumps(request_body)
        )
        if response.get('ResponseMetadata', {}).get('HTTPStatusCode') != 200:
            st.error("Failed to get response from Bedrock")
            return None
        response_body = json.loads(response['body'].read())
        analysis_content = response_body.get('content', [])
        
        analysis_result = "\n".join(
            item['text'] for item in analysis_content if item['type'] == 'text'
        )
        
        return analysis_result
    except Exception as e:
        st.error(f"Error analyzing template: {str(e)}")
        return None

def create_terraform_prompt(file_url, best_practices):
    return f"""
    Analyze the following Terraform configuration from URL: {file_url}
    
    For each of the following best practices from the AWS Well-Architected Framework,
    determine if it is applied in the given Terraform configuration.
    
    Best Practices:
    {json.dumps(best_practices, indent=2)}
    
    For each best practice, respond in the following EXACT format only:
    [Exact Best Practice Name as given in Best Practices]: [Why do you consider this best practice applicable?]
    
    IMPORTANT: Use the EXACT best practice name as given in the Best Practices.
    List only the practices which are Applied.
    Consider Terraform-specific implementations and resources.
    """

def create_cloudformation_prompt(file_url, best_practices):
    return f"""
    Analyze the following CloudFormation template from URL: {file_url}
    
    For each of the following best practices from the AWS Well-Architected Framework,
    determine if it is applied in the given CloudFormation template.
    
    Best Practices:
    {json.dumps(best_practices, indent=2)}
    
    For each best practice, respond in the following EXACT format only:
    [Exact Best Practice Name as given in Best Practices]: [Why do you consider this best practice applicable?]
    
    IMPORTANT: Use the EXACT best practice name as given in the Best Practices.
    List only the practices which are Applied.
    """        

  • Displaying Best Practices Mapped to AWS WAR Pillars

Once the uploaded infrastructure-as-code (IaC) file is analyzed using Amazon Bedrock, the resulting insights are organized and presented back to the user via the display_result() function.

This function performs several critical tasks to bridge AI-generated insights with the AWS Well-Architected Framework in a way that is readable, pillar-aligned, and useful for real-world decision-making.

Parse AI Output for Matched Best Practices

The GenAI response from Bedrock follows a strict format: [Best Practice Name]: Reason why it is applied.
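Given that contract, the practice names and reasons can be extracted with the same regular expression display_result() uses; the sample response lines below are hypothetical:

```python
import re

# Hypothetical model output following the prompt's required format.
sample_response = (
    "[SEC 1. Enable MFA for all users]: The template attaches an MFA policy.\n"
    "[REL 3. Use multiple Availability Zones]: Subnets span two AZs.\n"
)

# Same pattern as display_result(): group 1 is the practice name, group 2 the reason.
pattern = re.compile(r'\[(.*?)\]:\s*(.*)')
matches = pattern.findall(sample_response)
# matches -> [('SEC 1. Enable MFA for all users', 'The template attaches an MFA policy.'),
#             ('REL 3. Use multiple Availability Zones', 'Subnets span two AZs.')]
```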

It fetches a CSV file from S3 containing metadata on all AWS Well-Architected best practices, organized by:

  • Pillar (e.g., Operational Excellence, Security, Reliability)
  • Question under that pillar
  • Specific Best Practice mapped to that question
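A minimal illustration of that CSV layout; the column names match those read by the code, while the rows themselves are hypothetical examples:

```python
import csv
import io

# Column names ('Pillar', 'Question', 'Best Practice') match what display_result()
# reads; the two rows are illustrative, not from the real metadata file.
sample_csv = """Pillar,Question,Best Practice
Security,SEC 2 - How do you manage identities for people and machines?,BP2.1 Use strong sign-in mechanisms
Reliability,REL 10 - How do you use fault isolation to protect your workload?,BP10.1 Deploy the workload to multiple locations
"""

rows = list(csv.DictReader(io.StringIO(sample_csv)))
```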

Map Practices to WAR Pillars in Real Time

For each pillar, the function:

  • Uses the AWS WAR API to fetch the corresponding PillarId for the workload.
  • Queries all answer summaries for that pillar using the list_answers() API.
  • Matches the AI-suggested practices against the question's available and already-selected answer choices.

def display_result(analysis_results, file_path):
    pattern = re.compile(r'\[(.*?)\]:\s*(.*)')
    matches = pattern.findall(analysis_results)
    
    response = s3_client.get_object(Bucket=s3_bucket, Key=file_path)
    content = response['Body'].read().decode('utf-8')
    best_practices = pd.read_csv(StringIO(content))
    
    if best_practices.empty:
        st.error("No best practices could be loaded. Please check the file and try again.")
        return
    
    pillars = {}
    for index, row in best_practices.iterrows():
        pillar = row.get('Pillar', 'Unknown')
        question = row.get('Question', 'Unknown')
        practice = row.get('Best Practice', '')
        
        if pillar not in pillars:
            pillars[pillar] = {}
        if question not in pillars[pillar]:
            pillars[pillar][question] = []
        pillars[pillar][question].append(practice)
    
    st.title("Best Practices found in your architecture")
    for pillar, questions in pillars.items():
        with st.expander(f"**{pillar}**", expanded=False):

            # Get the pillar ID
            pillar_id = None
            lens_review_response = wa_client.get_lens_review(
                WorkloadId=workload_id,
                LensAlias=lens_alias
            )
            for pillar_summary in lens_review_response.get('LensReview', {}).get('PillarReviewSummaries', []):
                if pillar_summary.get('PillarName') == pillar:
                    pillar_id = pillar_summary.get('PillarId')
                    break
        
            if not pillar_id:
                print(f"Couldn't find PillarId for {pillar}. Skipping...")
                continue
            
            # Initialize pagination variables
            next_token = None
            
            while True:
                # Build the API request parameters
                params = {
                    'WorkloadId': workload_id,
                    'LensAlias': lens_alias,
                    'PillarId': pillar_id
                }
                if next_token:
                    params['NextToken'] = next_token
                
                # Get answers for each question under the current pillar
                answers_response = wa_client.list_answers(**params)
                
                for answer in answers_response['AnswerSummaries']:
                    question_title = answer['QuestionTitle']
                    selected_choices = answer['SelectedChoices']
                    
                    for question, practices in questions.items():
                        before_dash, separator, after_dash = question.partition(' - ')
                        if after_dash == question_title:
                            st.session_state.update_button_enabled = True
                            applied_practices = []
                            
                            choice_title_to_id = {choice['Title']: choice['ChoiceId'] for choice in answer.get('Choices', [])}
                        
                            for practice in practices:
                                practice_text = ' '.join(practice.split(' ')[1:]).strip()
                                if any(practice_text in choice['Title'] for choice in answer.get('Choices', []) if choice['ChoiceId'] in selected_choices):
                                    applied_practices.append((practice, "Previously Applied"))
                            
                                for key, reason in matches:
                                    if key.strip() == practice.strip():
                                        if not any(practice == item[0] for item in applied_practices):
                                            applied_practices.append((practice, reason))
                            # Display the question and its applied practices if any are applied
                            if applied_practices:
                                st.markdown(f"**{question}**")
                                st.session_state.update_button_enabled = True
                                for practice, reason in applied_practices:
                                    st.markdown(f"✔️ {practice}")
                                    # Optionally surface the reason:
                                    # st.markdown(f"   Reason: {reason}")
            
                # Check if there are more results
                next_token = answers_response.get('NextToken')
                if not next_token:
                    break

    # Enable the update button at the end of the function
    st.session_state.update_button_enabled = True        


3.3 AWS Well-Architected Review Workload Integration & Report Generation

Once the AI-generated best practices are mapped to their respective AWS Well-Architected (WA) pillars, the next steps are crucial: updating the official WAR review, summarizing risks, and generating a downloadable PDF report. These functions enable a complete and compliant review cycle.

The update_workload() function reads the best practices CSV from S3 to map each AI-identified practice to its:

  • Pillar
  • Question
  • Choice title (answer option)

Creates a mapping like:

{
    'security': [
        {'Question': 'How do you securely operate...', 'Practice': 'enable centralized identity...'}
    ]
}

Loops through all pillars in your current lens review and matches them with practices.

For each matched question:

  • Fetches available answer choices.
  • Matches the best practice with a choice title.
  • Updates the question with new choices if they weren't already selected.

Create a Milestone and Save the Review State

At the end of every update, a milestone is created to version the review state.

  • Milestone name includes the current date and time.
  • Uses wa_client.create_milestone() to store the state.

Summarize Risks and Highlight Critical Gaps

Before generating the final report, this function helps quantify the current risk profile of the workload.

It provides:

  • Total questions vs. answered questions
  • Count of questions flagged as HIGH or MEDIUM risk
  • A detailed breakdown by pillar

This is useful to prioritize remediation before workload deployment or review sign-off.

def update_workload(analysis_results, file_path):
    # Fetch workload and lens review details
    try:
        workload_response = wa_client.get_workload(WorkloadId=workload_id)
        lens_review_response = wa_client.get_lens_review(WorkloadId=workload_id, LensAlias=lens_alias)
    except ClientError as e:
        st.error(f"Error fetching workload {workload_id}: {e}")
        return str(e)
    # Read best practices from S3
    response = s3_client.get_object(Bucket=s3_bucket, Key=file_path)
    content = response['Body'].read().decode('utf-8')
    best_practices = pd.read_csv(StringIO(content))

    # Parse analysis results
    # Note: the second group must be greedy (.*), otherwise it matches an empty string
    analysis_bp_list = [key for key, value in re.findall(r'\[(.*?)\]:\s*(.*)', analysis_results)]

    # Create mappings from Best Practice to Pillar and Question
    practice_to_pillar_question = {}
    for index, row in best_practices.iterrows():
        pillar = row.get('Pillar', '').strip().lower()
        question = row.get('Question', '').strip().lower()
        practice = row.get('Best Practice', '').strip()
        
        # Remove all spaces from the pillar
        pillar_no_spaces = pillar.replace(' ', '')
        # Initialize the dictionary entry if it does not exist
        if pillar_no_spaces not in practice_to_pillar_question:
            practice_to_pillar_question[pillar_no_spaces] = []

        for bp in analysis_bp_list:
            if bp == practice:
                practice_text = ' '.join(practice.split(' ')[1:]).strip().lower()
                before_dash, separator, after_dash = question.partition(' - ')
                practice_to_pillar_question[pillar_no_spaces].append({
                        'Question': after_dash,
                        'Practice': practice_text
                })

    # Iterate over Pillar IDs from the Lens Review response
    for pillar_summary in lens_review_response.get('LensReview', {}).get('PillarReviewSummaries', []):
        pillar_id = pillar_summary.get('PillarId', 'No PillarId')
        print(f"Processing Pillar ID: {pillar_id}")

        # Initialize pagination variables
        next_token = None

        while True:
            try:
                # Build the API request parameters
                params = {
                    'WorkloadId': workload_id,
                    'LensAlias': lens_alias,
                    'PillarId': pillar_id
                }
                if next_token:
                    params['NextToken'] = next_token
                
                # Get questions for this pillar
                questions_response = wa_client.list_answers(**params)
                
                # Print the response for debugging
                #print(f"Questions response: {json.dumps(questions_response, indent=4)}")

                # Process questions
                for question in questions_response.get('AnswerSummaries', []):
                    question_id = question.get('QuestionId', 'No QuestionId')
                    question_title = question.get('QuestionTitle', 'No QuestionTitle')
                    current_choices = question.get('SelectedChoices', [])
                    updated_choices = current_choices
                    print(f"Processing Question: {question_title}")

                    # Iterate over the details list for the current pillar
                    for key in practice_to_pillar_question.keys():
                        if key.startswith(pillar_id.lower()):
                            print(f"Key matched: {key}")
                            for entry in practice_to_pillar_question[key]:
                                practice1 = entry.get('Practice', 'No Practice')
                                question1 = entry.get('Question', 'No Question')
                                new_choice_ids = []
                                if question1 == question_title.lower():
                                    #print(f"Question matched: {question1}")
                                    choice_title_to_id = {choice['Title']: choice['ChoiceId'] for choice in question.get('Choices', [])}
                                    for new_choice_title, choice_id in choice_title_to_id.items():
                                        if new_choice_title.lower() == practice1:
                                            print(f"Practice matched: {practice1}")
                                            new_choice_ids.append(choice_id)
                                    #print(f"new_choice_ids = {new_choice_ids}")

                                    updated_choices = list(set(updated_choices + new_choice_ids))  # Remove duplicates
                                    # Update the answer with the merged choices
                                    wa_client.update_answer(
                                       WorkloadId=workload_id,
                                       LensAlias=lens_alias,
                                       QuestionId=question_id,
                                       SelectedChoices=updated_choices,
                                       Notes='Updated during review process'
                                    )
                                    print(f"Updated Question Title: {question_title} with Choices: {updated_choices}")

                # Check if there is a next token
                next_token = questions_response.get('NextToken')
                if not next_token:
                    break  # Exit the loop if no more pages are available

            except ClientError as e:
                print(f"Error retrieving or updating answers for Pillar ID {pillar_id}: {e}")
                return e

    create_milestone()
    st.session_state.report_button_enabled = True
    return "Success"

def create_milestone():
    # Define a milestone name with current date and time
    current_datetime = datetime.now().strftime('%Y-%m-%d %H:%M:%S')
    milestone_name = f'Review completed on {current_datetime}'
    client_request_token = str(uuid.uuid4())  # Generate a unique client request token
  
    try:
        milestone_response = wa_client.create_milestone(
            WorkloadId=workload_id,
            MilestoneName=milestone_name,
            ClientRequestToken=client_request_token
        )
        print("Milestone created")

    except Exception as e:
        print(f"Error creating milestone: {e}")

def summarize_risks(workload_id, lens_alias):
    # Initialize counters for different risk levels
    pillar_summaries = {}
    total_questions = 0
    answered_questions = 0

    # Retrieve all pillars for the lens review
    lens_review_response = wa_client.get_lens_review(
        WorkloadId=workload_id,
        LensAlias=lens_alias
    )

    # Loop through each pillar and list answers for each pillar
    for pillar_summary in lens_review_response.get('LensReview', {}).get('PillarReviewSummaries', []):
        pillar_id = pillar_summary.get('PillarId', 'No PillarId')
        pillar_name = pillar_summary.get('PillarName', 'Unknown Pillar')

        pillar_summaries[pillar_id] = {
            'name': pillar_name,
            'total': 0,
            'answered': 0,
            'high': 0,
            'medium': 0,
        }

        # Initialize pagination variables
        next_token = None

        while True:
            try:
                # Build the API request parameters
                params = {
                    'WorkloadId': workload_id,
                    'LensAlias': lens_alias,
                    'PillarId': pillar_id
                }
                if next_token:
                    params['NextToken'] = next_token
                
                # Get answers for each question under the current pillar
                answers_response = wa_client.list_answers(**params)

                for answer_summary in answers_response.get('AnswerSummaries', []):
                    pillar_summaries[pillar_id]['total'] += 1
                    total_questions += 1
                    risk = answer_summary.get('Risk', 'UNANSWERED')
                    if risk != 'UNANSWERED':
                        pillar_summaries[pillar_id]['answered'] += 1
                        answered_questions += 1
                    if risk == 'HIGH':
                        pillar_summaries[pillar_id]['high'] += 1
                    elif risk == 'MEDIUM':
                        pillar_summaries[pillar_id]['medium'] += 1

                # Check if there is a next token
                next_token = answers_response.get('NextToken')
                if not next_token:
                    break  # Exit the loop if no more pages are available

            except ClientError as e:
                print(f"Error retrieving answers for Pillar ID {pillar_id}: {e}")
                break  # Exit the loop on error to prevent infinite retries

    return pillar_summaries, total_questions, answered_questions
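The NextToken pagination loop above can be exercised without AWS by stubbing the client. A minimal sketch, where FakeWAClient and its canned pages are hypothetical stand-ins for boto3's Well-Architected client:

```python
# Minimal sketch of the NextToken pagination pattern used with
# list_answers(). FakeWAClient and its canned pages are hypothetical
# stand-ins for boto3's Well-Architected client; real responses carry
# the same NextToken / AnswerSummaries shape.

class FakeWAClient:
    def __init__(self, pages):
        self._pages = pages  # each entry is one page of AnswerSummaries

    def list_answers(self, **params):
        idx = int(params.get('NextToken', 0))
        response = {'AnswerSummaries': self._pages[idx]}
        if idx + 1 < len(self._pages):
            response['NextToken'] = str(idx + 1)  # token for the next page
        return response

def count_risks(client, workload_id, lens_alias, pillar_id):
    """Tally risk levels for one pillar, following NextToken until exhausted."""
    counts = {}
    next_token = None
    while True:
        params = {'WorkloadId': workload_id, 'LensAlias': lens_alias,
                  'PillarId': pillar_id}
        if next_token:
            params['NextToken'] = next_token
        page = client.list_answers(**params)
        for answer in page.get('AnswerSummaries', []):
            risk = answer.get('Risk', 'UNANSWERED')
            counts[risk] = counts.get(risk, 0) + 1
        next_token = page.get('NextToken')
        if not next_token:
            break  # no more pages
    return counts

client = FakeWAClient([
    [{'Risk': 'HIGH'}, {'Risk': 'MEDIUM'}],
    [{'Risk': 'HIGH'}, {'Risk': 'UNANSWERED'}],
])
print(count_risks(client, 'wl-123', 'wellarchitected', 'security'))
# {'HIGH': 2, 'MEDIUM': 1, 'UNANSWERED': 1}
```

Isolating the loop this way makes the pagination logic easy to test before wiring it to the real service.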


def display_risk_summary(pillar_summaries, total_questions, answered_questions):
    # Display the summary of risks on the Streamlit interface
    st.subheader("Risk Summary")
    st.markdown(f"Questions Answered: {answered_questions}/{total_questions}")
    
    # Initialize counters for overall risk levels
    total_high = 0
    total_medium = 0
    
    # Sum up the risks across all pillars
    for pillar_data in pillar_summaries.values():
        total_high += pillar_data['high']
        total_medium += pillar_data['medium']
    
    # Display overall risk metrics
    col1, col2 = st.columns(2)
    col1.markdown(f"<h3 style='color: red;'>High Risks: {total_high}</h3>", unsafe_allow_html=True)
    col2.markdown(f"<h3 style='color: orange;'>Medium Risks: {total_medium}</h3>", unsafe_allow_html=True)
    
    # Display risk breakdown by pillar in a table
    st.subheader("Risk Breakdown by Pillar")
    
    # Prepare data for the table
    table_data = []
    for pillar_id, pillar_data in pillar_summaries.items():
        table_data.append({
            "Pillar": pillar_data['name'],
            "Questions Answered": f"{pillar_data['answered']}/{pillar_data['total']}",
            "High Risks": pillar_data['high'],
            "Medium Risks": pillar_data['medium'],
        })
    
    # Create a DataFrame and display it as a table
    df = pd.DataFrame(table_data)
    df = df.reset_index(drop=True)
    
    html = df.to_html(index=False)

    st.markdown(html, unsafe_allow_html=True)

# Functions related to the Generate Report button
def generate_and_download_report(workload_id, lens_alias):
    try:
        # Generate the report using GetLensReviewReport API
        response = wa_client.get_lens_review_report(
            WorkloadId=workload_id,
            LensAlias=lens_alias
        )
        
        # Extract the Base64 encoded report data
        base64_string = response.get('LensReviewReport', {}).get('Base64String')
        
        if not base64_string:
            st.error("Failed to retrieve the report data.")
            return None
        
        # Decode the Base64 string
        report_data = base64.b64decode(base64_string)
        
        # Create a download link
        b64 = base64.b64encode(report_data).decode()
        href = f'<a href="data:application/pdf;base64,{b64}" download="WA_Review_Report_{workload_id}.pdf">Click here to download the report</a>'
        st.markdown(href, unsafe_allow_html=True)
        return "Report generated successfully"
    except ClientError as e:
        error_code = e.response['Error']['Code']
        error_message = e.response['Error']['Message']
        st.error(f"AWS Error: {error_code} - {error_message}")
        if error_code == "ValidationException":
            st.error("Please check if the WorkloadId and LensAlias are correct.")
        elif error_code == "ResourceNotFoundException":
            st.error("The specified workload or lens was not found.")
        elif error_code == "AccessDeniedException":
            st.error("You don't have permission to perform this operation. Check your IAM policies.")
        else:
            st.error("Please check your AWS credentials and permissions.")
        return None
    except Exception as e:
        st.error(f"Unexpected error: {str(e)}")
        return None        

Display the risk summary dashboard in Streamlit

This visualizes the risk data in a clean, Streamlit-friendly UI:

  • Shows total answered vs. total questions.
  • Displays total high- and medium-risk counts.
  • Renders a DataFrame as an HTML table summarizing answered questions and high/medium risks per pillar.

This real-time dashboard gives teams a one-glance view of where the architecture may be vulnerable.
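The aggregation behind that view can be checked in isolation. A minimal sketch, where the sample dictionary is hypothetical input matching the shape summarize_risks() builds:

```python
# Sum answered questions and high/medium risks across pillars, as the
# dashboard does before rendering. The sample pillar_summaries dict is
# hypothetical; its shape matches what summarize_risks() produces.

def total_risks(pillar_summaries):
    total_high = sum(p['high'] for p in pillar_summaries.values())
    total_medium = sum(p['medium'] for p in pillar_summaries.values())
    answered = sum(p['answered'] for p in pillar_summaries.values())
    total = sum(p['total'] for p in pillar_summaries.values())
    return total_high, total_medium, answered, total

sample = {
    'security': {'name': 'Security', 'total': 9, 'answered': 7,
                 'high': 2, 'medium': 1},
    'reliability': {'name': 'Reliability', 'total': 8, 'answered': 8,
                    'high': 0, 'medium': 3},
}
print(total_risks(sample))  # (2, 4, 15, 17)
```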

Generate and download the report as a PDF

Finally, the app supports on-demand PDF generation via the get_lens_review_report() API.

  • Fetches the report as a Base64-encoded string.
  • Decodes it and renders a download link.
  • The file is named using the workload ID and is instantly accessible.

This feature allows stakeholders to download and share a WAR-compliant report without needing to log into the AWS Console.
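The decode-and-link step can be sketched without calling AWS. A minimal example, where the sample PDF bytes and workload ID are hypothetical; in the app, the Base64 string comes from the get_lens_review_report() response:

```python
import base64

# Decode a Base64 report payload and build the data-URI download link,
# mirroring generate_and_download_report(). The payload and workload ID
# below are hypothetical stand-ins for the real API response.

def build_download_link(base64_string, workload_id):
    report_data = base64.b64decode(base64_string)  # raw PDF bytes
    b64 = base64.b64encode(report_data).decode()   # re-encode for the data URI
    return (f'<a href="data:application/pdf;base64,{b64}" '
            f'download="WA_Review_Report_{workload_id}.pdf">'
            f'Click here to download the report</a>')

payload = base64.b64encode(b'%PDF-1.4 sample').decode()
link = build_download_link(payload, 'wl-123')
print('WA_Review_Report_wl-123.pdf' in link)  # True
```

Embedding the bytes in a data URI keeps the download entirely client-side, so no temporary file or S3 object is needed.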

4. Key Benefits of GenAI Augmented WAR

Automated Analysis:

  • Instant scanning of IaC files.
  • Objective evaluation of best practice adherence.
  • Reduces reliance on manual code reviews and stakeholder interviews.

Time Efficiency:

  • Transforms a multi-day process into minutes.
  • Automates documentation and report creation.

Consistency & Accuracy:

  • Standardized evaluation rules ensure consistent assessments.
  • Reduces human error and subjective interpretation.
  • Supports reproducibility for audit and compliance.

Enhanced Coverage:

  • Analyzes entire codebases rather than limited samples.
  • Detects subtle design flaws and improvement areas.
  • Aligns assessments with current AWS best practices and reference architectures.

5. Conclusion

The GenAI augmented Well-Architected Review redefines how AWS workloads are assessed. By leveraging advanced AI models, automation, and streamlined interfaces, this solution eliminates manual inefficiencies and subjective interpretations. It offers cloud architects a fast, consistent, and intelligent path to ensure workloads align with AWS best practices. This approach not only accelerates the review process but also deepens architectural insight, ensuring scalable, secure, and efficient cloud operations for the future.

More articles by Rajesh Natesan
