Prompt Engineering in the Python Assistant


When writing code with the MicroStation Python Assistant, effective prompting is crucial to achieving desired outcomes and maximizing productivity. The quality of your input directly influences the quality and relevance of the output provided by the assistant.

Here's a comprehensive guide to prompting when using the MicroStation Python Assistant:

Understanding the MicroStation Python Assistant

The MicroStation Python Assistant is a generative artificial intelligence (AI) assistant developed by Bentley Systems. It understands the MicroStation Python API and generates scripts, simplifying complex coding concepts and automating mundane tasks, making software development more accessible, even to non-experts. It functions by predicting the most likely continuation of the text you provide.

1. Core Principles of Effective Prompting

Adopting clear communication principles is key. Think of the assistant as a very literal-minded junior partner.

Be Specific and Clear

    ◦ Vague requests will lead to vague or incorrect output. Always state exactly what you want it to do and what you don't want it to do.

    ◦ Use precise language and avoid ambiguity; quantify requests whenever possible. For instance, instead of "Make this app better," try "Refactor the app to clean up unused components and improve performance, without changing UI or functionality". If asking for a function, specify what it should calculate, what inputs it takes, and what it should return.

    ◦ A more detailed description generally leads to more accurate results.
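For example, a request such as "Write a Python function `deg_to_rad` that takes an angle in degrees as a float and returns the angle in radians as a float" specifies the calculation, the input, and the return value, and leaves little room for ambiguity. The assistant can then produce something like the sketch below (the function name and behavior are illustrative, not part of any MicroStation API):

```python
import math

def deg_to_rad(degrees: float) -> float:
    """Convert an angle from degrees to radians.

    Takes a float (degrees) and returns a float (radians),
    exactly as the prompt specified.
    """
    return degrees * math.pi / 180.0
```

A vaguer request such as "write an angle converter" might instead produce a function with the wrong direction of conversion, the wrong units, or an unexpected signature.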

Provide Comprehensive Context and Details

The assistant has no common sense or implicit context beyond what you give it. The more relevant information you provide, the better the results will be.

    ◦ Share Documentation: If you're working with specific code from another project, or documentation you found online, copy and paste the relevant sections directly into your prompt and explicitly state, "Based on this piece of documentation, can you create this thing?". This helps the AI absorb new information even if it wasn't in its training material.

    ◦ Use Existing Code Snippets: If you have existing code, share it to guide the assistant and maintain consistency. You can also implement a conceptual idea yourself in code and then direct the assistant to follow that pattern.

    ◦ Structured Prompts: Organize prompts with explicit sections like "Context," "Task," "Guidelines," "Constraints," and "Examples". This helps the model better understand what you want and forces you to think through your requirements. Place crucial details at the beginning and reiterate absolute requirements at the end.

    ◦ Context Window Limitations: The MicroStation Python Assistant, like other large language models, uses the live chat history for context. If a conversation becomes too long (e.g., beyond 10 prompts as observed in some systems), the assistant might forget earlier details, leading to "bloated" context and degraded performance. If you find yourself in an unproductive loop or the assistant is consistently generating bad code, start a new chat session with a fresh, detailed prompt.
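A structured prompt with explicit sections can also be assembled programmatically, which makes it reusable across tasks. The sketch below uses the section names suggested above; the project details are hypothetical placeholders:

```python
# Assemble a structured prompt; each section is explicit and labeled.
# The content here is a hypothetical example, not a real project.
sections = {
    "Context": "I maintain a MicroStation Python script that places cells.",
    "Task": "Add a function that rotates each placed cell by a given angle.",
    "Guidelines": "Follow PEP 8; keep functions under 30 lines.",
    "Constraints": "Do not change existing function signatures.",
    "Examples": "rotate_cells(elements, angle_degrees=45)",
}

prompt = "\n\n".join(f"## {name}\n{text}" for name, text in sections.items())
# Reiterate the absolute requirement at the end, as recommended above.
prompt += "\n\nRemember: do not change existing function signatures."
print(prompt)
```

Keeping the sections in a dictionary also forces you to fill in each one before prompting, which surfaces missing requirements early.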

Break Down Tasks (Iterate and Refine)

Avoid trying to achieve a large, complex task in a single prompt.

    ◦ Define Small Goals: Break down the work into smaller, manageable steps. Prompt the assistant to build one small piece of functionality at a time.

    ◦ Iterate Step-by-Step: After the assistant completes a step, test it thoroughly. If it works, save your progress. If not, provide specific feedback to the assistant to refine its output. This iterative process, often called a "vibe coding loop," involves thinking, prompting, testing, debugging, and repeating.

    ◦ Build in Checkpoints/Version Control: Save versions of your scripts so you have the ability to go back to previous states if a change introduces problems.
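Checkpoints need not involve a full version-control system; even a timestamped copy of the script taken before each risky change gives you a state to roll back to. A minimal sketch (the file and folder names are illustrative):

```python
import shutil
import time
from pathlib import Path

def checkpoint(script_path: str, backup_dir: str = "checkpoints") -> Path:
    """Copy a script into a backup folder under a timestamped name."""
    src = Path(script_path)
    dest_dir = Path(backup_dir)
    dest_dir.mkdir(exist_ok=True)
    stamp = time.strftime("%Y%m%d-%H%M%S")
    dest = dest_dir / f"{src.stem}-{stamp}{src.suffix}"
    shutil.copy2(src, dest)  # copy2 preserves the file's timestamps
    return dest
```

Calling `checkpoint("my_script.py")` before accepting an AI-suggested rewrite means a bad change costs you one file copy, not an afternoon.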

2. Planning Your Project with the Assistant

You can leverage the assistant for project planning, much like a product manager.

    ◦ "Vibe PMing": Ask the assistant to help create a project outline, including requirements. This is where the AI writes the specification with your feedback.

    ◦ Co-develop a Product Requirements Document (PRD): For complex tasks, you can define a PRD collaboratively with the assistant, specifying project goals, key features, and user experience guidelines. The AI can even help generate tasks from a PRD.

    ◦ Ask for a Plan First: For complex tasks, ask the assistant to outline its plan, assumptions, and potential risks before it starts generating code. You can also ask it for the "simplest version" of the plan.

3. Advanced Prompting Techniques

Define the Persona: Tell the assistant what role to embody (e.g., "You're a CAD Modeller with little programming knowledge"). This sets the stage and helps the assistant understand the problem domain and the expertise expected of it.

Positive vs. Negative Instructions: State what you want to happen rather than what you don't. For example, instead of "don't make the padding small," say "please add more padding".

Asking for Thought Process (Chain of Thought): Ask the assistant to explain its reasoning step-by-step. This helps you understand its logic and can improve your prompts. For example, "Explain your thought process" or "Think as long as you need and ask me questions if you need more info".

Self-Consistency Prompting: Instruct the assistant to test its own code or suggested solutions against examples you provide, and then verify that it works correctly. You can also ask it to "Give me 10 answers and choose the best one," prompting it to evaluate multiple approaches.
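You can make this concrete by handing the assistant your test cases up front and asking it to verify its output against them. The pattern below shows what such verification looks like once the generated code is back in your editor (`parse_level_name` is a hypothetical assistant-generated function, not a MicroStation API):

```python
def parse_level_name(raw: str) -> str:
    """Hypothetical assistant-generated function: normalize a level name."""
    return raw.strip().upper().replace(" ", "_")

# The examples you supplied in the prompt double as a verification suite.
examples = {
    "  default ": "DEFAULT",
    "Dim Lines": "DIM_LINES",
    "grid": "GRID",
}

for raw, expected in examples.items():
    result = parse_level_name(raw)
    assert result == expected, f"{raw!r}: got {result!r}, expected {expected!r}"
print("All examples passed.")
```

If any assertion fails, the failing input and output give you a precise, specific correction to paste back into the chat.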

Meta Prompting:

    ◦ Reverse Meta Prompting: After resolving an issue, ask the assistant to summarize what went wrong and how it was fixed, and then draft a reusable prompt for similar future challenges. This helps build a library of effective prompts.

Custom Rules and Guardrails: Define rules in each prompt to control the assistant's behavior. These are like natural language configurations that can include coding style, preferred technologies, or common pitfalls to avoid. Examples include "Always prefer simple solutions," "Avoid duplication of code," or "Only make requested changes".
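Because these rules must travel with every prompt, it helps to keep them in one place and prepend them automatically. A minimal sketch, using the example rules listed above (the helper function is an assumption, not a feature of the assistant):

```python
# Standing rules, defined once and reused across all prompts.
RULES = """You are assisting with MicroStation Python scripts.
Rules:
- Always prefer simple solutions.
- Avoid duplication of code.
- Only make requested changes.
"""

def with_rules(task: str) -> str:
    """Prepend the standing rules to a task-specific prompt."""
    return RULES + "\nTask:\n" + task

print(with_rules("Rename the variable 'el' to 'element' throughout."))
```

Keeping the rules in one constant means a change to your guardrails propagates to every future prompt automatically.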

Zero-Shot vs. Few-Shot Prompting:

    ◦ Zero-shot: Ask the model to perform a task with no examples (e.g., "Write a Python function to calculate the factorial of a given number"). This is efficient for common or clearly described tasks.

    ◦ Few-shot: Provide a few examples of desired input-output pairs to guide the AI towards a specific style or format. This consumes more tokens but can yield more consistent results, especially for complex outputs like test cases.
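The two styles can be compared side by side. A few-shot prompt embeds the input-output pairs directly so the assistant can infer the pattern; the pairs below are illustrative, not a real abbreviation scheme:

```python
# Few-shot: show the model the desired mapping before asking for a new case.
few_shot_prompt = """Convert element type names to their abbreviations.

Input: Line String
Output: LSTR

Input: Complex Shape
Output: CSHP

Input: Text Node
Output:"""

# Zero-shot, by contrast, simply states the task with no examples.
zero_shot_prompt = (
    "Write a Python function to calculate the factorial of a given number."
)
```

The few-shot version costs more tokens per request, but the examples pin down the exact output format you expect.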

Managing Hallucinations and Ensuring Accuracy: AI models can confidently invent incorrect information or code. To mitigate this:

    ◦ Provide Grounding Data: Supply reliable context (e.g., a PRD, user flows, or relevant documentation).

    ◦ In-Prompt References: Include relevant snippets or data for factual queries.

    ◦ Instruct Honesty: Include a guideline like "If you are not sure of a fact or the correct code, do not fabricate it – instead, explain what would be needed or ask for clarification".

    ◦ Iterative Verification: After the AI gives an answer, verify it.

4. Debugging and Troubleshooting

Bugs are an inevitable part of coding, even with AI assistance. The MicroStation Python Assistant can help with debugging and troubleshooting.

Provide Detailed Error Information: When an error occurs, copy the exact error message and relevant code snippets into your prompt. Describe what you were trying to do and what you've already attempted.
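Python's standard `traceback` module captures the full error text so you can paste it verbatim rather than retyping it. A small sketch (the failing call is deliberately artificial, standing in for your own script):

```python
import traceback

def report_error() -> str:
    """Run a snippet and return the full traceback text for pasting into a prompt."""
    try:
        1 / 0  # deliberately failing code standing in for your script
    except Exception:
        return traceback.format_exc()
    return "No error raised."

error_text = report_error()
print("Paste into your prompt:\n" + error_text)
```

The captured text includes the exception type, message, and line numbers, which is exactly the detail the assistant needs to diagnose the failure.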

Work Iteratively on Fixes: Copy errors back into the chat and work with the assistant to resolve them. Some tools may offer a "Fix all" or "Try to fix it for me" button.

Be Assertive: If the assistant is stuck or keeps returning incorrect solutions, you might need to be "brutal" and tell it to find a different approach or to admit if it cannot solve the problem.

The Python Assistant Terminal: For web-based outputs, use the browser's developer tools (console) to check the localhost page for errors, then paste them back into the chat.