
We're Hiring!

Join Our Team

Role: AI Systems Prompt Engineer

Location: Hyderabad


Role Description

This is a full-time, on-site role for an AI Systems Prompt Engineer located in Hyderabad. As a Prompt Engineer, you will be a founding member of our product and engineering team, primarily responsible for optimizing and standardizing our AI's performance across all use cases.


### Key Responsibilities

Prompt Design & Optimization

* Design and Architect Prompts: Develop, write, and refine sophisticated System Prompts, User Prompts, and Few-Shot Examples to extract the highest quality and most consistent results from various models.

* Model-Agnostic Strategy: Strategically test and compare performance across different LLMs for specific tasks, identifying which model (e.g., GPT vs. Gemini) provides the superior output and designing fallback/routing logic for our wrapper.

* Advanced Prompting Techniques: Implement and iterate on advanced techniques such as Chain-of-Thought (CoT), Tree-of-Thought (ToT), Graph-of-Thoughts (GoT), Retrieval-Augmented Generation (RAG), Self-Correction/Critique, ReAct, cognitive prompting, and zero-shot, one-shot, and few-shot prompting.

* Structured Output Engineering: Engineer prompts to reliably generate structured data formats (JSON, XML, YAML) for seamless integration into our backend and product logic, as sketched below.
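As an illustration of the structured-output work above, here is a minimal sketch using the OpenAI Python SDK's JSON mode; the model name, field names, and example text are placeholders chosen for this sketch, not requirements of the role.

```python
import json

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Constrain the reply to valid JSON so the backend can parse it directly.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    response_format={"type": "json_object"},
    temperature=0,  # keep extraction output as deterministic as possible
    messages=[
        {
            "role": "system",
            "content": (
                "Extract the sender's name and email from the message. "
                'Respond with JSON only, e.g. {"name": "...", "email": "..."}.'
            ),
        },
        {"role": "user", "content": "You can reach me at jane@example.com. Regards, Jane Doe"},
    ],
)

record = json.loads(response.choices[0].message.content)
print(record["name"], record["email"])
```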


Evaluation & Tooling

* A/B Testing and Evaluation: Design and execute systematic A/B tests to measure prompt performance against key metrics (e.g., accuracy, latency, coherence, helpfulness).

* Quality Assurance (QA): Develop and implement automated evaluation mechanisms to continuously monitor for *hallucinations, bias, security risks (Prompt Injection)*, and output drift across model updates.

* Prompt Library Management: Build and maintain a version-controlled, documented repository of core prompt templates and system instructions for internal use by the engineering team.

* API Interaction: Work directly with the OpenAI API, Google Gemini API, and other model APIs (via Python/Node.js) to manage parameters such as temperature and top-p and to optimize context-window usage (a short sketch follows this list).
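To make the API-interaction and fallback/routing responsibilities concrete, here is a minimal sketch of a wrapper that calls OpenAI with explicit temperature/top-p settings and falls back to Gemini on failure; the model names and the bare try/except routing policy are illustrative assumptions, not a prescribed design.

```python
import os

import google.generativeai as genai
from openai import OpenAI

openai_client = OpenAI()  # OPENAI_API_KEY from the environment
genai.configure(api_key=os.environ["GOOGLE_API_KEY"])


def complete(prompt: str, temperature: float = 0.2, top_p: float = 0.9) -> str:
    """Try OpenAI first; fall back to Gemini if the primary call fails."""
    try:
        response = openai_client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative primary model
            messages=[{"role": "user", "content": prompt}],
            temperature=temperature,
            top_p=top_p,
        )
        return response.choices[0].message.content
    except Exception:
        # Minimal fallback route; production logic would log, retry, and classify errors.
        fallback = genai.GenerativeModel("gemini-1.5-flash")  # illustrative fallback model
        result = fallback.generate_content(
            prompt,
            generation_config={"temperature": temperature, "top_p": top_p},
        )
        return result.text


print(complete("Summarize the benefits of structured prompts in one sentence."))
```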


Collaboration & Product Development

* Cross-functional Partnership: Collaborate closely with Product Managers and UX Designers to translate complex product requirements and user journeys into effective LLM interactions.

* Engineer Support: Serve as the subject matter expert on LLM capabilities, limitations, and prompt engineering best practices for the entire engineering team.


Required Skills & Qualifications


* Experience: 0 to 2 years of experience in a data science, machine learning, or software engineering role, with a focus on prompt engineering for production-level Generative AI applications.

* LLM Platform Expertise: Experience with, or knowledge of, working directly with the APIs of OpenAI/Azure OpenAI and Google Gemini/Vertex AI.

* Programming: Proficiency in Python and experience using relevant libraries (e.g., requests, LLM SDKs) to manage API calls, pre-process data, and post-process model outputs.

* NLP/ML Fundamentals: Solid understanding of core NLP concepts, LLM architecture (e.g., Transformers), and the difference between in-context learning (prompting) and fine-tuning.

* Analytical Thinking: Exceptional critical thinking and analytical skills to diagnose complex model failures and iterate on prompt designs.

* Communication: Excellent written communication skills, with a mastery of language structure, nuance, and clarity—essential for communicating intent to an AI model.


More than experience, we are looking for individuals who want the opportunity to be part of an AI-first product company and who bring a curious mindset and the ability to learn quickly.


### Bonus Qualifications

* Knowledge of RAG implementation using vector databases/indexing frameworks (e.g., Pinecone, Weaviate, Chroma, LlamaIndex, LangChain); see the sketch after this list.

* Knowledge of fine-tuning or custom model training for domain-specific tasks.

* Familiarity with ethical AI practices, including monitoring and mitigating bias and toxicity in AI outputs.

* Prior experience or knowledge of working in a startup environment or on a quickly evolving product.
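For the RAG bonus qualification above, the following is a minimal retrieval sketch using Chroma's default in-memory client and embedding function; the collection name, documents, and question are made up purely for illustration.

```python
import chromadb

# In-memory Chroma client using the library's default embedding function.
client = chromadb.Client()
collection = client.create_collection("product_docs")  # illustrative collection name

# Index a couple of documents; IDs are required and must be unique.
collection.add(
    documents=[
        "Our wrapper routes requests between GPT and Gemini based on task type.",
        "Structured outputs are returned as JSON for the backend to consume.",
    ],
    ids=["doc-1", "doc-2"],
)

# Retrieve the most relevant chunk and splice it into the prompt (the retrieval step of RAG).
results = collection.query(query_texts=["How are model outputs integrated?"], n_results=1)
context = results["documents"][0][0]
prompt = f"Answer using only this context:\n{context}\n\nQuestion: How are model outputs integrated?"
print(prompt)
```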


### Things to Know


* Candidates are expected to work five days a week from the office (no remote working option is available).

* We've structured a significant grant of stock options that we expect will make overall compensation highly attractive and reward you for the company's growth.

Apply Now
