GitHub Copilot Certification Exam Prep

There is an exciting 4-Week Course on GitHub Community to help developers master GitHub Copilot & prepare for the GitHub Copilot Certification Exam.

The threads contain a very useful curation of study material spanning the 7 Domains that will be covered in the exam & a knowledge check with practice questions for each of the 4 weeks. Click on the links below corresponding to the week numbers for the material.

I plan to compile the reasoning for the answers to the practice questions here so that these notes can serve as a quick reference for the exam & later -

Week 1

  1. GitHub Copilot is trained on publicly available open-source repositories and large proprietary datasets curated by GitHub. It does not use private repositories of individual users. Documentation and Stack Overflow discussions are not primary data sources for Copilot.
  2. GitHub Copilot enhances a developer's productivity during pair programming by providing inline code suggestions based on context.
  3. Key ethical considerations when using GitHub Copilot for software development:
    1. Ensuring the code adheres to licensing requirements. Licensing issues are critical when using AI-generated code. Copilot may generate code based on public repositories or other code sources, so it’s essential to ensure that the generated code complies with licensing and usage terms.
    2. Regularly reviewing AI-generated code for potential bias
  4. To mitigate security risks when using Copilot developers should conduct thorough code reviews before deployment to ensure that no security vulnerabilities or errors are introduced. Relying solely on AI suggestions without validation can pose risks to the integrity and security of the project.
  5. Features exclusive to GitHub Copilot Enterprise compared to Business
    1. Integration with Microsoft Entra ID (Azure AD) to manage authentication and access for enterprise users
    2. Admin-level policy management allows enterprise admins to have finer control over the usage of Copilot in their organization
  6. Both the Business and Enterprise plans offer enterprise-grade security and privacy features.
  7. Developers can maximize the accuracy of GitHub Copilot's code suggestions by writing clear and descriptive function names and comments. It ensures that Copilot can generate contextually appropriate and semantically correct code, boosting productivity and code quality.
  8. The following actions reflect responsible AI usage with GitHub Copilot:
    1. Using code suggestions as a reference and refining manually
    2. Ensuring the AI-generated code meets compliance standards
  9. The GitHub Copilot for Business subscription plan allows GitHub Copilot usage in a business environment. It offers features like centralized billing and team management, which are not available in the individual plans.
  10. Steps to enable Copilot in VS Code:
    1. Install the GitHub Copilot extension
    2. Link a GitHub account with an active Copilot subscription
  11. GitHub Copilot handles different programming paradigms by adapting suggestions based on project context and code style
  12. GitHub Copilot contributes to responsible AI usage by:
    1. Providing proper attribution for AI-generated code to maintain transparency and respect intellectual property rights
    2. Encouraging continuous user feedback to refine AI behavior, enhance accuracy, and mitigate potential biases in suggestions
  13. Recommended practice for using GitHub Copilot effectively in a team setting:
    1. Establishing clear guidelines for Copilot usage
    2. Regular knowledge-sharing sessions on Copilot best practices

Week 2

1) GitHub Copilot for Enterprise allows organizations to configure custom LLM training with proprietary data. With GitHub Copilot Enterprise, you can fine-tune a private, custom model built on a company’s specific knowledge base and private code. These custom models enable more accurate and contextually relevant suggestions and responses. Custom models for GitHub Copilot Enterprise are in public preview and subject to change.

Copilot Business and Enterprise both support the SOC 2 Type II framework.

Both Copilot Business and Enterprise ensure zero data retention for AI-generated completions. The GitHub Copilot extension in the code editor does not retain your prompts for any purpose after it has provided suggestions, unless you are a Copilot Pro or Copilot Free subscriber and have allowed GitHub to retain your prompts and suggestions.

2) GitHub Copilot does NOT provide a suggestion when the user is writing a private, proprietary API call with no prior examples in public repositories, or when the user is using Copilot in VS Code but their organization has Copilot completions disabled at the repository level. The latter is an administrative restriction that overrides user-level settings.

Hitting a rate limit might temporarily pause your ability to use Copilot Chat, but this is a temporary condition, not a scenario where Copilot would never provide suggestions. Once the rate-limit period expires, suggestions resume, so this is not a definitive scenario where Copilot would NOT provide a suggestion.

3) Agent Mode can automatically recognize and correct errors in its own code suggestions.

In agent mode, Copilot gathers context across multiple files, suggests and tests edits, and validates changes for your approval, so you can make comprehensive updates with speed and accuracy.

4) Copilot filters sensitive data using a heuristic-based approach before processing. The pre-processed prompt is then passed through the Copilot Chat language model, which is a neural network that has been trained on a large body of text data. 

As suggestions are generated and before they are returned to the user, Copilot applies an AI-based vulnerability prevention system that blocks insecure coding patterns in real-time to make Copilot suggestions more secure. Our model targets the most common vulnerable coding patterns, including hardcoded credentials, SQL injections, and path injections.

Input prompts and output completions are run through content filters.

Copilot processes prompts flagged for PII. This happens on the Proxy server hosted on GitHub-owned Azure tenants.

5) The suggestion most frequently accepted by developers in similar contexts is ranked higher. Simply matching common patterns isn't the primary driver for ranking suggestions. Copilot balances common patterns with the specific context of the code being written. 

To generate a code suggestion, the Copilot extension begins by examining the code in your editor, focusing on the lines just before and after your cursor, but also gathering other information, including other files open in your editor and the URLs of repositories or file paths, to identify relevant context. That information is sent to Copilot’s model to make a probabilistic determination of what is likely to come next and generate suggestions.

To generate a suggestion for chat in the code editor, the Copilot extension creates a contextual prompt by combining your prompt with additional context including the code file open in your active document, your code selection, and general workspace information, such as frameworks, languages, and dependencies. That information is sent to Copilot’s model, to make a probabilistic determination of what is likely to come next and generate suggestions.

6) The Copilot Proxy plays a crucial role in the GitHub Copilot data pipeline by routing and processing user requests before sending them to the LLM.

7) A user can experience delayed Copilot completions if the IDE's context window exceeds Copilot's processing limit and Copilot’s rate limits have been exceeded, causing temporary delays.

8) GitHub Copilot is most likely to produce incorrect or “hallucinated” code when low-confidence completions are not filtered out. Low-confidence completions occur when Copilot lacks sufficient training data, leading to incorrect or unpredictable results.

9) When using Copilot-generated code be aware that Copilot does not automatically sanitize user input, increasing the risk of injection attacks.

10) GitHub Copilot Chat is supported in JetBrains IDEs, Visual Studio, Visual Studio Code & Xcode. These platforms support AI-powered chat, allowing developers to ask coding-related questions, request explanations, and generate code directly in their development environment. 

GitHub Codespaces and the GitHub Web UI do not support Copilot Chat at this time. You can use GitHub Copilot in GitHub Codespaces by adding a VS Code extension but it does not currently support Copilot Chat functionality.

11) A key limitation of Copilot’s CLI integration compared to IDE-based Copilot features is that it lacks access to context from open files in the IDE. This limits its ability to provide deeply relevant suggestions. It does not exclusively work with Git commands, automatically commit, or restrict itself to single-line commands.

12) The key limitations of GitHub Copilot’s LLM-based code generation are that it struggles with complex multi-step reasoning, often requiring developer intervention for logical correctness and it can produce incomplete or syntactically incorrect suggestions, especially in low-resource programming languages. GitHub Copilot’s LLM-based code generation is powerful but has limitations in complex multi-step problem-solving, often requiring developer oversight to ensure logical correctness. Additionally, in low-resource programming languages, Copilot’s training data may be insufficient, leading to incomplete or incorrect syntax. Its suggestions are not deterministic, and it does not guarantee prevention of licensing conflicts with GPL-licensed code.

13) The recommended best practice for an organization implementing Copilot for Business is to configure organization-wide policies to enforce responsible AI usage. 

14) Copilot for Enterprise can be customized for an organization’s internal workflows by training Copilot on internal, proprietary codebases for better context and by configuring Copilot Knowledge Bases for internal documentation lookups.

GitHub Copilot for Enterprise allows organizations to enhance AI-generated suggestions by training Copilot on proprietary codebases, ensuring better alignment with internal best practices. 

Additionally, Copilot Knowledge Bases help integrate internal documentation, allowing developers to query internal workflows, coding standards, and best practices efficiently. Restricting Copilot to pre-approved open-source licenses or limiting it to past completions are not supported methods of customization.

Week 3

1 A,C) Using Copilot to generate code for encryption algorithms without verification, and accepting Copilot-generated code that lacks licensing information, could introduce legal or security risks when using GitHub Copilot in a corporate environment.

2 A) To optimize GitHub Copilot’s inline chat for debugging, rewrite prompts to be more detailed, explicitly mentioning expected outputs.

3 D) Copilot might suggest a less efficient sorting algorithm if not prompted explicitly.

4 C) The most likely reason a Copilot-suggested function that computes Fibonacci numbers uses recursion but lacks memoization is that Copilot lacks context about performance optimizations unless explicitly prompted.

Memoization is an optimization technique that might not be included in the most straightforward or common implementations found in training data, especially if the function is simple and lacks context about performance needs.
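As a sketch of the difference a memoization hint can make (the function names here are illustrative, not Copilot output):

```python
from functools import lru_cache

# The naive recursive form Copilot commonly suggests: exponential time,
# because each call recomputes the same subproblems.
def fib_naive(n):
    if n < 2:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

# Prompting with a comment such as "use memoization for performance"
# typically steers Copilot toward a cached variant, e.g. via lru_cache:
@lru_cache(maxsize=None)
def fib_memo(n):
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)
```

`fib_memo(30)` runs in linear time, while `fib_naive(30)` makes over a million calls for the same result.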

5 B) When using GitHub Copilot for SQL query generation, Copilot might suggest queries vulnerable to SQL injection.
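A minimal sketch of the vulnerable pattern versus the parameterized alternative, using Python's built-in `sqlite3` (table and data are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

def find_user_unsafe(name):
    # Vulnerable pattern Copilot can produce: string interpolation lets
    # input like "x' OR '1'='1" rewrite the WHERE clause.
    return conn.execute(f"SELECT name FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(name):
    # Parameterized query: the driver treats the value as data, not SQL.
    return conn.execute("SELECT name FROM users WHERE name = ?", (name,)).fetchall()
```

With the injection string `"x' OR '1'='1"`, the unsafe version returns every row, while the safe version returns nothing.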

6 B) GitHub Copilot is designed to generate code suggestions based on the context of the codebase it is working within. When enabled in a private repository, Copilot uses machine learning models trained on a vast amount of code to generate suggestions that align with the coding patterns and styles observed in the project’s existing files. This means that the suggestions it provides are tailored to fit the specific context of the project, rather than retrieving code from similar open-source projects or external sources. 

Copilot doesn't directly fetch external code in real-time 

7 B) Copilot generates responses based on the input it receives. If the initial prompt lacks specific requirements about performance and indexing, the generated query might not include these considerations. Re-prompting with more detailed requirements allows Copilot to understand the complete context and generate a more optimized query. You can specify performance requirements, expected data volume, and indexing needs in the new prompt.

8 B) To yield the best results for generating optimized SQL queries using Copilot, the prompt should be specific, clear, and detailed about the optimization requirements. 

9 A, B) The best approach to prevent Copilot from suggesting code similar to public implementations of a proprietary algorithm is to use private repositories with Copilot to limit exposure to public code patterns and explicitly describing unique constraints and design principles in inline comments.

Turning off Copilot suggestions for functions with proprietary logic is overly restrictive 

Users cannot modify Copilot's training dataset - this is controlled by GitHub/Microsoft

10 A,D) The factors that contribute most significantly to increased developer productivity when using GitHub Copilot are:

  • Reduction in the time spent searching for solutions on external websites.
  • Faster onboarding of new developers due to AI-assisted code understanding.

While increased number of lines of code written per hour and a higher frequency of completed pull requests per developer are beneficial outcomes of using Copilot, they are not the primary contributors to increased developer productivity when compared to the above 2 factors. 

11 A,B) The factors that most significantly influence GitHub Copilot’s ability to generate high-quality completions are the recency of the training data used to build the model and the user's coding patterns and past accepted suggestions. 

12 A,B) To improve the relevance of suggestions from GitHub Copilot in Visual Studio Code, the most effective strategies would be:

  • Add a comment explicitly mentioning the latest API version before the function definition. 
  • Use natural language prompts that describe the intent of the code rather than function names.

13 A) The best way to guide GitHub Copilot to generate better completions for a machine learning model in Python, especially when it comes to suggesting suboptimal hyperparameter choices, is to provide explicit guidance within the code itself. This can be effectively done by using inline comments to specify preferred hyperparameters and model architectures. By doing so, the developer directly communicates their preferences and requirements to Copilot, allowing it to generate more tailored and appropriate suggestions.
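A sketch of the comment-guided approach (the hyperparameter values below are illustrative assumptions, not recommendations):

```python
# Preferred hyperparameters for this model (guides Copilot's completions):
# - optimizer: Adam with learning_rate 0.001
# - batch_size: 64
# - hidden_layers: [128, 64]
# With comments like these in place just above the code, Copilot tends to
# fill in configuration that matches them, e.g.:
config = {
    "optimizer": "adam",
    "learning_rate": 0.001,
    "batch_size": 64,
    "hidden_layers": [128, 64],
}
```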

Week 4

1. B, C) When leveraging GitHub Copilot for unit testing, the most effective strategies to ensure the generated test cases cover edge cases are:

  • Providing explicit comments describing edge case scenarios before writing the function
  • Using a combination of property-based testing and manually crafted assertions
Providing explicit comments ensures that Copilot understands edge case requirements. Property-based testing helps verify a wider range of inputs, complementing Copilot’s generated cases.
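A small sketch of the edge-case-comment technique (function and cases are hypothetical):

```python
# Edge cases to cover (hints that steer Copilot's test generation):
# - empty list
# - single element
# - all-negative values
def max_value(values):
    """Return the largest value, or None for an empty list."""
    return max(values) if values else None

def test_max_value_edge_cases():
    assert max_value([]) is None          # empty input
    assert max_value([7]) == 7            # single element
    assert max_value([-3, -1, -9]) == -1  # all negative

test_max_value_edge_cases()
```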

2 B) When dealing with floating-point precision errors in test cases suggested by Copilot, the most effective approach is to modify the assertions to account for a precision tolerance. This is because floating-point calculations can often result in small discrepancies due to the inherent imprecision of representing decimal numbers in binary. By allowing for a small margin of error (tolerance) in the assertions, you can make the test more robust and less prone to failing due to minor precision issues. 

Python's math.isclose function can be used to compare two floating-point numbers with a specified tolerance.

Adjusting assertions with a tolerance (e.g., using assertAlmostEqual) ensures reliable comparisons.
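A minimal illustration of why exact equality fails and how a tolerance fixes it:

```python
import math

total = 0.1 + 0.2
# Exact equality fails: 0.3 cannot be represented exactly in binary
# floating point, so 0.1 + 0.2 differs from it in the last bits.
assert total != 0.3
# Compare with a tolerance instead (math.isclose defaults to a
# relative tolerance of 1e-09):
assert math.isclose(total, 0.3)
```

In `unittest`, `assertAlmostEqual(total, 0.3)` achieves the same effect by rounding to 7 decimal places by default.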

3 A) To ensure that Copilot-generated tests follow a Test-Driven Development (TDD) workflow, the best approach is to write failing test cases before implementing the function. This aligns with the core principles of TDD, which emphasizes writing tests before writing the actual code. 

While it sounds appealing, there is currently no specific "TDD mode" in Copilot.
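The red-green-refactor loop can be sketched as follows (`slugify` is a made-up example function):

```python
# Step 1 (red): write the test first, before the implementation exists.
# At this point the test fails with a NameError.
def test_slugify():
    assert slugify("Hello World") == "hello-world"

# Step 2 (green): implement just enough to pass, with or without
# Copilot's help.
def slugify(text):
    return text.strip().lower().replace(" ", "-")

# Step 3 (refactor): clean up while keeping the test green.
test_slugify()
```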

4 B) GitHub Copilot may generate non-functional or incorrect test cases if incomplete function definitions without parameter types are provided. This is because Copilot may not be able to infer the correct data types, function behavior, or edge cases.

Copilot doesn't work offline, so without an internet connection it generates no code at all rather than incorrect code.

Using Copilot with multiple test frameworks in the same file can confuse Copilot, leading to inconsistent suggestions.

5 A,C) To improve debugging efficiency:

  • Ask Copilot Chat to explain unexpected test failures
  • Provide error logs to Copilot Chat for step-by-step troubleshooting
Copilot Chat can analyze errors and provide explanations, but human validation is essential. Providing error logs improves Copilot’s debugging assistance.

6 B) You are working on an API that processes large JSON payloads. If Copilot keeps generating tests with small sample data, provide comments specifying ‘test with large JSON data’ before invoking Copilot. By explicitly mentioning "test with large JSON data" in comments, you signal the intent to generate tests with substantial payloads.
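A sketch of pairing the comment hint with a test (payload shape and size are arbitrary for illustration):

```python
import json

# Hint for Copilot: test with large JSON data (~10,000 records)
def make_large_payload(n=10_000):
    return json.dumps([{"id": i, "name": f"user{i}"} for i in range(n)])

def test_parse_large_payload():
    payload = make_large_payload()
    records = json.loads(payload)
    assert len(records) == 10_000
    assert records[-1]["id"] == 9_999

test_parse_large_payload()
```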

7 B) The primary function of GitHub Copilot’s content exclusion settings is to restrict Copilot from suggesting code similar to private repositories.

When you exclude certain files or directories, GitHub Copilot won't use the content in those files to inform its suggestions. This action can lead to more secure and compliant code suggestions.  It's essential to carefully analyze which files should be excluded to balance security and functionality.

8 B, D) Even with security settings enabled, GitHub Copilot may still generate code that uses outdated or insecure cryptographic practices, such as MD5 hashing, if it's trained on codebases that use these practices. This is because Copilot's primary goal is to generate code that is similar to what it has seen before, rather than to ensure the security of the generated code. 

Similarly, if Copilot is trained on public repositories that contain insecure code patterns, it may generate suggestions that replicate these patterns, even if security settings are enabled. This is because Copilot's training data is sourced from public repositories, which may not always reflect the latest security best practices.

While Copilot Chat can help find some common security vulnerabilities and help you fix them, you should not rely on Copilot for a comprehensive security analysis. Using security tools and features will more thoroughly ensure your code is secure. 

9 B) If Copilot consistently misses testing for integer overflow errors, explicitly add a comment describing edge cases, like integer overflows, before invoking Copilot.

By explicitly mentioning integer overflow cases in comments, you're directly guiding Copilot to consider these scenarios. Comments serve as prompts that influence the generated code's focus and coverage.

Copilot doesn't have configuration settings for test generation strictness.
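Since Python integers don't overflow natively, a sketch of the technique might target a fixed-width boundary instead (the function and limits below are hypothetical):

```python
INT32_MAX = 2**31 - 1
INT32_MIN = -(2**31)

# Edge cases for Copilot to cover: values at and just beyond INT32_MAX,
# e.g. when results must fit a 32-bit database column or wire format.
def add_int32(a, b):
    """Add two ints, raising if the result exceeds signed 32-bit range."""
    result = a + b
    if result > INT32_MAX or result < INT32_MIN:
        raise OverflowError("result does not fit in int32")
    return result

def test_add_int32_overflow():
    assert add_int32(INT32_MAX - 1, 1) == INT32_MAX
    try:
        add_int32(INT32_MAX, 1)
        assert False, "expected OverflowError"
    except OverflowError:
        pass

test_add_int32_overflow()
```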

10 B) When writing JavaScript tests with Jest, you can ensure Copilot generates async test cases correctly by providing comments specifying ‘test async functions’.

Comments like "// async test" or "// test async function" signal to Copilot that the test involves asynchronous operations

Jest is an open source project maintained by Facebook, and it's especially well suited for React code testing.

11 C) If Copilot Chat in Visual Studio Code is not suggesting relevant test scenarios, a likely reason is that Copilot is missing necessary function comments or contextual prompts.

GitHub Copilot Chat uses your code's context and semantics to suggest assertions that ensure the function is working correctly. 

12 C) GitHub Copilot Enterprise offers enterprise-wide content exclusion settings for security-sensitive codebases.

Organizations and enterprises with a subscription to GitHub Copilot Business or GitHub Copilot Enterprise can prevent Copilot from accessing certain content.

13 B) If you want Copilot to generate secure cryptographic implementations, copying Copilot’s first suggestion and manually verifying security is the best approach among the provided options. It balances Copilot’s assistance with human verification. This ensures that any potential vulnerabilities are identified and addressed before deployment.

While specifying ‘Use secure cryptographic methods’ in comments can guide Copilot, it does not guarantee that the generated code will be secure as the prompt is too generic. Copilot may not fully interpret or implement the security requirements as intended.

There is no explicit "security-enhanced mode" to enable. 

As the question explicitly states "You want Copilot to generate secure cryptographic implementations." disabling Copilot when writing encryption functions entirely doesn't seem like a good option.

14 B) The most critical issue with this test case:
def test_authenticate_user():
    assert authenticate("admin", "password123") == True
...is that it uses hardcoded credentials, which is a security risk.

The question mentions "most critical issue" so "All of the above" doesn't seem reasonable.
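One common fix for the hardcoded-credentials issue is to load test credentials from the environment; a sketch (variable names and the `authenticate` stub are placeholders):

```python
import os

# Provide placeholder values only for this self-contained demo; in a real
# project these would come from CI secrets or a local .env file.
os.environ.setdefault("TEST_USER", "test-account")
os.environ.setdefault("TEST_PASSWORD", "not-a-real-secret")

def authenticate(user, password):
    # Stand-in for the real authentication call.
    return user == os.environ["TEST_USER"] and password == os.environ["TEST_PASSWORD"]

def test_authenticate_user():
    # No literals in the test: credentials come from the environment.
    assert authenticate(os.environ["TEST_USER"], os.environ["TEST_PASSWORD"])

test_authenticate_user()
```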

15 B, D) The following GitHub Copilot security safeguards help mitigate accidental exposure of credentials in generated code:
  • Copilot respects repository secrets settings and avoids using secret keys in suggestions
  • GitHub Copilot’s content filters prevent suggesting hardcoded credentials
While Copilot has preventive measures, it doesn't automatically detect and remove API keys after they're generated. It tries to prevent suggesting them in the first place.

Copilot does more than just flag patterns; it actively tries to prevent suggesting insecure code.


Additional Notes -
  1. In 2021, OpenAI released the multilingual Codex model, which was built in partnership with GitHub. 
  2. GitHub Copilot launched as a technical preview in June 2021 and became generally available in June 2022 as the world’s first at-scale generative AI coding tool.
  3. Codex contains upwards of 170 billion parameters.
  4. GitHub Copilot gathers context from:
    1. Code after cursor
    2. File name and
    3. Other open tabs in the editor
  5. GitHub Copilot API, the GitHub Copilot LLMs are hosted in GitHub-owned Azure tenants. These LLMs consist of AI models created by OpenAI that have been trained on natural language text and source code from publicly available sources, including code in public repositories on GitHub.
  6. OpenAI's GPT-4 model adds support in GitHub Copilot for AI-powered tags in pull-request descriptions through a GitHub app that organization admins and individual repository owners can install. GitHub Copilot automatically fills out these tags based on the changed code. Developers can then review or modify the suggested descriptions.
  7. GitHub Copilot has different offerings for organizations and developers - Free, Pro, Pro+, Business, and Enterprise plans. All offerings include code completion and chat assistance, but they differ in terms of license management, policy management, and IP indemnity, as well as how data may be used or collected.
  8. Both GitHub Copilot Business & GitHub Copilot Enterprise have IP indemnity and enterprise-grade security, safety, and privacy.
  9. GitHub Copilot Enterprise can index an organization's codebase for a deeper understanding and for suggestions that are more tailored. It offers access to GitHub Copilot customization to fine-tune private models for code completion.
  10. GitHub Copilot Free users are limited to 2,000 code completions and 50 chat requests per month (including Copilot Edits).
  11. GitHub Copilot Autofix provides contextual explanations and code suggestions to help developers fix vulnerabilities in code, and is included in GitHub Advanced Security and available to all public repositories.
  12. GitHub Copilot X aims to bring AI beyond the IDE to more components of the overall platform, such as docs and pull requests.
  13. GitHub Copilot is available as an extension for VS Code, Visual Studio, Vim/Neovim, JetBrains suite of IDEs, Azure Data Studio, Xcode.
  14. Although code completion functionality is available across all these extensions, chat functionality is currently available only in Visual Studio Code, JetBrains and Visual Studio. 
  15. GitHub Copilot is also supported in terminals through GitHub CLI and as a chat integration in Windows Terminal Canary. 
  16. With the GitHub Copilot Enterprise plan, GitHub Copilot is natively integrated into GitHub.com. 
  17. All plans are supported in GitHub Copilot in GitHub Mobile. 
  18. GitHub Mobile for Copilot Pro and Copilot Business have access to Bing and public repository code search. 
  19. Copilot Enterprise in GitHub Mobile gives you additional access to your organization's knowledge.
  20. Languages with less representation in public repositories may be more challenging for Copilot Chat to provide assistance with
  21. In Copilot Chat, if a particular request is no longer helpful context, delete that request from the conversation. Alternatively, if none of the context of a particular conversation is helpful, start a new conversation.
  22. GitHub Copilot transmits data to GitHub’s Azure tenant to generate suggestions, including both contextual data about the code and file being edited (“prompts”) and data about the user’s actions (“user engagement data”). The transmitted data is encrypted both in transit and at rest; Copilot-related data is encrypted in transit using transport layer security (TLS), and for any data we retain at rest using Microsoft Azure’s data encryption (FIPS Publication 140-2 standards).
  23. Copyright law permits the use of copyrighted works to train AI models. GitHub Copilot’s AI model was trained with the use of code from GitHub’s public repositories - which are publicly accessible and within the scope of permissible copyright use.
  24. GitHub Copilot users should align their use of Copilot with their respective risk tolerances. It is your responsibility to assess what is appropriate for the situation and implement appropriate safeguards. 
  25. GitHub does not claim ownership of a suggestion.
  26. As suggestions are generated and before they are returned to the user, Copilot applies an AI-based vulnerability prevention system that blocks insecure coding patterns in real-time to make Copilot suggestions more secure. Our model targets the most common vulnerable coding patterns, including hardcoded credentials, SQL injections, and path injections. The system leverages LLMs to approximate the behavior of static analysis tools and can even detect vulnerable patterns in incomplete fragments of code. This means insecure coding patterns can be quickly blocked and replaced by alternative suggestions. The best way to build secure software is through a secure software development lifecycle (SDLC). GitHub offers solutions to assist with other aspects of security throughout the SDLC, including code scanning (SAST), secret scanning, and dependency management (SCA). GitHub recommends enabling features like branch protection to ensure that code is merged into your codebase only after it has passed your required tests and peer review.
  27. As with any code that your developers did not originate, the decision about when, how much, and in what context to use any code is one your organization needs to make based on its policies, and in consultation with industry and legal service providers. All organizations should maintain appropriate policies and procedures to ensure that these licensing concerns are properly addressed.
  28. GitHub Copilot Extensions are a type of GitHub App that integrates the power of external tools into GitHub Copilot Chat. For example, Sentry is an application monitoring software with a Copilot Extension. Copilot Extensions can be developed by anyone, for private or public use, and can be shared with others through the GitHub Marketplace. 
  29. GitHub Copilot employs advanced learning techniques such as zero-shot, one-shot, and few-shot learning to adapt to different scenarios. This allows it to generate relevant code based on varying levels of context, from completely new tasks to tasks similar to ones it has seen before.
  30. The key difference between GitHub Copilot Business and Enterprise is that the Enterprise version allows organizations to utilize their own codebase to further train and personalize Copilot. This results in more tailored and relevant suggestions based on the organization's specific coding practices and standards.
  31. Slash commands are shortcuts to quickly solve common development tasks within the chat or inline pane.
  32. VPN Proxy support via self-signed certificates is a feature exclusive to GitHub Copilot for Business. This limitation in the individual version helps differentiate the offerings and cater to different user needs, with the business version providing more advanced networking capabilities.
  33. GitHub Copilot Chat can analyze the code context and generate relevant inline comments based on natural language prompts or specific commands (like '/doc'). This feature helps developers quickly add meaningful documentation to their code, improving its readability and maintainability.
  34. When GitHub Copilot generates multiple suggestions, it allows developers to cycle through them using left and right arrows. This feature gives developers control over which suggestion to use, enabling them to choose the most appropriate option for their specific needs.
  35. Context and intent are crucial for GitHub Copilot Chat to provide relevant and accurate assistance. By specifying the scope and goal, developers can guide Copilot to focus on the specific area of the codebase and the desired outcome, resulting in more targeted and useful suggestions.
  36. GitHub Copilot uses advanced neural networks and language models, not traditional statistical analysis or pattern recognition.
  37. Clarity, specificity, and surrounding context are key principles of effective prompt engineering, while verbosity can overwhelm the model.
  38. Prompts are user inputs, including comments, partially written code, or natural language instructions, guiding Copilot in generating relevant outputs.
  39. You can accept GitHub Copilot's suggestions by pressing the Tab key.
  40. GitHub Copilot Enterprise aligns suggestions with the organization's specific standards, leveraging internal knowledge for better productivity and collaboration.
  41. Using implicit prompts with slash commands in inline chat for fixing code issues with GitHub Copilot helps get better responses without writing longer prompts, simplifying interactions.
  42. To improve the performance of GitHub Copilot Chat limit prompts to coding questions or tasks to enhance output quality.
  43.  One way GitHub Copilot automates routine coding tasks for developers is by generating boilerplate code for common functionalities, such as setting up a REST API.
  44. Content exclusion can be configured by adding custom keywords or phrases to the exclusion list within the organization's settings.
  45. GitHub Copilot Chat assists in the process of creating unit tests by generating relevant code snippets, suggesting input parameters and expected outputs, and helping identify potential edge cases. This comprehensive approach helps developers create more thorough and effective unit tests.
  46. Copilot is designed to generate only basic test cases unless explicitly guided.
  47. The Arrange-Act-Assert pattern is a common structure for organizing unit tests. It helps create clear and maintainable tests by separating the test setup, the action being tested, and the verification of the results.
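The Arrange-Act-Assert structure might look like this in a Python test (a minimal sketch; `apply_discount` is a hypothetical function under test):

```python
def apply_discount(price, percent):
    """Hypothetical function under test: reduce a price by a percentage."""
    return price * (1 - percent / 100)

def test_apply_discount():
    # Arrange: set up the inputs and the expected result
    price, percent = 100.0, 20
    expected = 80.0

    # Act: invoke the behavior being tested
    result = apply_discount(price, percent)

    # Assert: verify the outcome
    assert result == expected

test_apply_discount()
```

Keeping the three phases visually separated makes it obvious what a failing test was actually checking.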
  48. Generating assertions for function input parameters helps prevent invalid data from being processed by the function. It is not primarily done to check the function's output.
  49. The Test Explorer in Visual Studio is used to run and debug unit tests, view test results, and manage test cases in the workspace.
  50. GitHub Copilot logs are transmitted automatically when required, especially in scenarios where troubleshooting or reporting issues is necessary, even without explicit user uploading.
  51. The primary purpose of content filtering in GitHub Copilot is to ensure safety and security by preventing the generation of potentially harmful or malicious code.
  52. Readability, complexity, modularity, reusability, testability, extensibility, reliability, performance, security, scalability, usability, and portability contribute to overall code quality.
  53. Code reliability refers to the probability of a software system functioning without failure under specified conditions for a specified period of time.
  54. Identifying potential issues in the code can help prevent bugs and errors from occurring, which improves code reliability.
  55. GitHub Copilot can use your code and custom instructions to code the way you prefer. 
  56. The 4S Principles of prompt engineering:
    • Single: Always focus your prompt on a single, well-defined task or question. This clarity is crucial for eliciting accurate and useful responses from Copilot.
    • Specific: Ensure that your instructions are explicit and detailed. Specificity leads to more applicable and precise code suggestions.
    • Short: While being specific, keep prompts concise and to the point. This balance ensures clarity without overloading Copilot or complicating the interaction.
    • Surround: Utilize descriptive filenames and keep related files open. This provides Copilot with rich context, leading to more tailored code suggestions.
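Applied to an inline comment prompt, the 4S principles might look like this (the email-validation function below is a hypothetical illustration, not an actual Copilot output):

```python
# A single, specific, short prompt written as a comment — Copilot would
# complete the function body from it:
#   "Write a function that validates an email address with a regex and
#    returns True for valid addresses, False otherwise."
import re

def is_valid_email(address: str) -> bool:
    # A plausible completion for the prompt above
    pattern = r"^[\w.+-]+@[\w-]+\.[\w.-]+$"
    return re.fullmatch(pattern, address) is not None
```

The "Surround" principle would then be satisfied by keeping related files (e.g., the module where the function is used) open in the editor.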
  57. Fine-tuning is essential to adapt LLMs for specific tasks, enhancing their performance. Traditional full fine-tuning means training all parts of a neural network, which can be slow and heavily reliant on resources. GitHub uses the LoRA (Low-Rank Adaptation) fine-tuning method. LoRA involves adding small trainable parts to each layer of the pretrained model without changing the entire structure, optimizing its performance for specific tasks.
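The LoRA idea can be sketched in a few lines of NumPy: instead of updating a full weight matrix `W`, train a small low-rank pair `A` and `B` whose product is added to the frozen weights (a toy illustration of the technique, not GitHub's actual implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

d, r = 8, 2                         # layer width d, low rank r << d
W = rng.normal(size=(d, d))         # pretrained weights: frozen
A = rng.normal(size=(d, r)) * 0.01  # small trainable matrix
B = np.zeros((r, d))                # trainable; zero-initialized so W is unchanged at start

def forward(x):
    # Effective weights are W + A @ B; only A and B would receive gradients
    return x @ (W + A @ B)

x = rng.normal(size=(1, d))
# At initialization B == 0, so the adapted layer matches the frozen one
assert np.allclose(forward(x), x @ W)

# LoRA trains 2*d*r parameters instead of d*d — here 32 instead of 64
print(2 * d * r, "trainable vs", d * d, "full")
```

The resource savings come from the parameter count: `A` and `B` together hold `2·d·r` values versus `d²` for full fine-tuning, a gap that grows dramatically at real model sizes.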
  58. Secret scanning is a security feature that helps detect and prevent the accidental inclusion of sensitive information such as API keys, passwords, tokens, and other secrets in your repository. When enabled, secret scanning scans commits in repositories for known types of secrets and alerts repository administrators upon detection. Secret scanning is available for Public repositories (for free) as well as Private and internal repositories in organizations using GitHub Enterprise Cloud with GitHub Advanced Security enabled.
  59. During secret scanning, Copilot looks for patterns and heuristics that match known types of secrets.
  60. GitHub Copilot Chat proposes fixes for bugs by suggesting code snippets and solutions based on the context of the error or issue.
  61. GitHub Copilot Chat does not compare your code with a database of known bug patterns, but it suggests possible fixes based on the context of the error or issue.
  62. Code refactoring is a process that enhances readability, simplifies complexity, increases modularity, and improves reusability, thereby creating a more manageable and maintainable codebase.
  63. The content exclusion feature ensures that GitHub Copilot doesn't inadvertently suggest sensitive or proprietary code.
  64. Content exclusions can be configured at the repository and organization level.
  65. You can provide GitHub Copilot with context to generate tailored responses for your repository by creating a file named .github/copilot-instructions.md in the repository
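Such an instructions file is plain Markdown; the content below is a hypothetical example of what a `.github/copilot-instructions.md` might contain:

```markdown
# Copilot instructions for this repository

- We use Python 3.12 with type hints on all public functions.
- Prefer pytest for tests; name test files `test_<module>.py`.
- Follow PEP 8; maximum line length is 100 characters.
- Use the repository's logging helper instead of print statements.
```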
  66. You can exclude specific files from GitHub Copilot by browsing to the repository settings on GitHub and adding the paths to exclude.
  67. GitHub Copilot can use semantic information from a file that is ignored by GitHub Copilot content exclusions if the information is provided by the IDE indirectly.
  68. Seat usage for GitHub Copilot at the enterprise level during a billing cycle = Number of seats × (Days elapsed / Total days in billing cycle)
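As a worked example of the proration formula above (with hypothetical numbers), 50 seats active for 15 days of a 30-day billing cycle would be billed as 25 prorated seats:

```python
def seat_usage(seats: int, days_elapsed: int, total_days: int) -> float:
    """Prorated seat usage: seats x (days elapsed / total days in billing cycle)."""
    return seats * days_elapsed / total_days

# 50 seats used for half of a 30-day cycle -> 25.0 prorated seats
print(seat_usage(50, 15, 30))
```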
  69. GitHub Copilot Editor configuration file is a Markdown file with natural language instructions for customizing Copilot Chat responses
  70. When you accept a code completion suggestion that matches code in a public GitHub repository, information about the matching code is logged. The log entry includes the URLs of files containing matching code, and the name of the license that applies to that code, if any was found. 
  71. Copilot code referencing searches for matches by taking the code suggestion, plus some of the code that will surround the suggestion if it is accepted and comparing it against an index of all public repositories on GitHub.com. Code in private GitHub repositories, or code outside of GitHub, is not included in the search process. The search index is refreshed every few months. 
  72. Chat participants aren't available in edit mode, so you can't specify @workspace as part of your prompt. However, you can add context to the chat session using #codebase and by dragging and dropping files or folders from Visual Studio Code's EXPLORER view into the Chat view. 
  73. You can use the agent mode to scaffold a test project, create test files, run tests, generate test reports, or perform other tasks related to unit testing. The agent mode is best for creating unit tests that require a more in-depth understanding of the project.
  74. The ask mode in Chat view is optimized for asking questions about your code projects, coding topics, and general technology concepts. The edit mode is optimized for making edits across multiple files in your codebase. The agent mode is optimized for starting an agentic coding workflow.
  75. Implicit prompts help get better responses from GitHub Copilot without writing longer prompts, making it easier to interact and fix code issues.
  76. The principle of Fairness in Responsible AI emphasizes ensuring AI systems perform equally well across all demographic groups.
  77. Microsoft addresses potential biases in AI systems by reviewing training data, testing with balanced samples, and using adversarial debiasing.
  78. LoRA is a method of fine-tuning Large Language Models (LLMs) that adds trainable elements to each layer of the pretrained model without a complete overhaul.
  79. Docsets are private custom collections of internal code and documentation tailored to organizations' specific needs and workflows. GitHub Copilot Enterprise docset management can help you find the answers you're looking for and present them to you succinctly.
  80. VPN Proxy support via self-signed certificates isn't supported in GitHub Copilot Pro
  81. Copilot Enterprise & Business support zero data retention for code snippets and usage telemetry unlike Copilot Pro 
  82. Copilot uses OpenAI Codex, a new AI system developed by OpenAI, to help derive context from written code and comments, and then suggests new lines or entire functions.
  83. GitHub Codespaces is a hosted developer environment operating in the cloud that can be run with Visual Studio Code. All GitHub accounts can use Codespaces for up to 60 hours free each month on 2-core instances.
  84. To avoid consuming all your monthly GitHub Codespaces time, it's important to delete all your resources after you've uploaded your changes to your repository. 
  85. An API acts as the intermediary that allows different applications to communicate with each other. 
  86. The Generate Docs smart action is useful when you don't have specific requirements that would require a prompt.
  87. When you need to write a longer prompt, the prompt should be written using several short sentences. Start with an overview that describes your goal and then provide specific details.
  88. Chat participants aren't available in edit and agent modes, so you can't specify @workspace as part of your prompt. However, you can add context to the chat session using #codebase and by dragging and dropping files or folders from Visual Studio Code's EXPLORER view into the Chat view.
  89. When using GitHub Copilot, customizing Copilot's output to match organizational standards ensures unit test suggestions are aligned with coding standards.
  90. Agent mode autonomously determines the relevant context and files to edit. In edit mode, you need to specify the context yourself.
  91. You can use the agent mode to start an agentic coding workflow. Copilot is capable of self-healing: agent mode evaluates the outcome of the generated edits and might iterate multiple times to resolve intermediate issues.
  92. As Copilot processes your request in agent mode, it streams the suggested code edits directly in the editor.
  93. Use .copilotignore file to exclude sensitive files from GitHub Copilot’s context. .copilotignore only works prospectively (going forward). If sensitive files were already processed by Copilot before being added to .copilotignore, that information may have already influenced Copilot's context.
  94. The .copilotignore file uses a similar pattern matching syntax to .gitignore.
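A `.copilotignore` file uses gitignore-style patterns; a hypothetical example:

```
# Exclude environment files and credentials
.env
secrets/
config/*.key

# Exclude generated artifacts
dist/
```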
  95. Copilot Chat currently operates with a context window of 4k tokens. GitHub Copilot's context window typically ranges from approximately 200-500 lines of code or up to a few thousand tokens. This limitation can vary depending on the specific implementation and version of Copilot being used.
  96. The GitHub Copilot Business and Enterprise plans include IP indemnity, which provides legal protection against intellectual property claims related to the use of Copilot suggestions. For GitHub to assume legal responsibility, the Matching public code setting must be set to Blocked.
  97. Generate Tests Smart Action - create unit tests for specific code blocks or entire files without writing a prompt
  98. In Visual Studio Code and Visual Studio, content exclusions are not applied when you use the @github chat participant in your question.
  99. GitHub Copilot CLI doesn't retain your prompts, but it keeps your usage analytics. You can configure whether you want GitHub Copilot to keep and use your usage data to improve the product. 
  100. In Visual Studio Code, you can use the Ctrl+Enter shortcut to trigger GitHub Copilot.
Work In Progress...
