Job Details

Senior Software Development Engineer in Test (Senior SDET) - Remote

2026-01-07  |  Luxury Presence  |  All cities, AK
Description:

Luxury Presence is the leading digital platform revolutionizing the real estate industry for agents, teams, and brokerages. Our award-winning websites, cutting-edge marketing solutions, and AI-powered mobile platform empower real estate professionals to grow their business, operate more efficiently, and deliver exceptional service to their clients. Trusted by over 80,000 real estate professionals, including 31 of the nation's 100 top-performing agents as published in the Wall Street Journal, Luxury Presence continues to set the standard for innovation and excellence in real estate technology.

As a Senior Software Development Engineer in Test (SDET), you will lead the design and evolution of Luxury Presence's next-generation test automation strategy that merges traditional test engineering with AI-augmented quality intelligence.

You'll own the architecture and development of scalable test frameworks, introduce AI-enabled test generation and validation, and build custom Model Context Protocol (MCP) tools to supercharge Playwright testing. You'll collaborate with engineering and AI teams to test AI agents, LLM-integrated features, and AI-driven customer experiences.

Key Responsibilities
  • Architect and evolve end-to-end automated testing frameworks, with Playwright as the core automation technology.
  • Lead the integration of AI-enabled testing methodologies into the SDLC, including predictive test selection, LLM-based test case generation, and self-healing automation.
  • Design, build, and maintain custom MCP (Model Context Protocol) servers and tools that enhance Playwright's context-awareness and adaptability.
  • Develop test strategies for AI agents and AI-integrated product features, ensuring accuracy, reliability, and safety.
  • Partner with AI/ML engineers to define validation frameworks for AI model performance, prompt correctness, and deterministic output behavior.
  • Mentor SDETs and QA engineers on advanced automation, AI-assisted testing, and test strategy best practices.
  • Define quality metrics and feedback loops for intelligent CI/CD pipelines, enabling faster, safer releases.
  • Collaborate cross-functionally with Product, DevOps, and Architecture to embed testing standards and ensure high quality across all delivery stages.
  • Stay ahead of emerging AI testing frameworks and champion innovative approaches to improve testing scalability and precision.
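One responsibility above mentions "self-healing automation". As a rough illustration of that idea (not Luxury Presence's actual implementation), a self-healing locator strategy can be sketched as a ranked list of candidate selectors, where the test falls back to the next candidate when the primary one no longer matches. The `findFirstMatching` helper and the mock selector probe below are hypothetical names standing in for a real Playwright `page.locator(...).count()` check:

```typescript
// Minimal sketch of a self-healing locator strategy: try a ranked list of
// candidate selectors and fall back when the primary one has drifted.
// All names here are illustrative, not from any specific framework.

type SelectorProbe = (selector: string) => boolean;

function findFirstMatching(
  candidates: string[],
  exists: SelectorProbe
): string | null {
  for (const selector of candidates) {
    if (exists(selector)) return selector; // first healthy selector wins
  }
  return null; // every candidate failed: surface a real test failure
}

// Mock probe standing in for a live-page selector check:
const liveSelectors = new Set(['[data-testid="submit"]', "button.primary"]);
const probe: SelectorProbe = (s) => liveSelectors.has(s);

// The primary id-based selector has drifted; the data-testid fallback heals it.
const healed = findFirstMatching(["#submit-btn", '[data-testid="submit"]'], probe);
console.log(healed); // '[data-testid="submit"]'
```

In a real framework, a healed fallback would typically also be logged so the stale primary selector can be updated rather than silently papered over.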
What You'll Bring
  • 7+ years of experience in software testing, test automation, or software engineering, including 2+ years in a technical leadership capacity.
  • Proven expertise in Playwright (or similar tools like Cypress, Selenium, or WebDriverIO) and TypeScript/JavaScript programming.
  • Demonstrated success integrating AI systems or tools into QA workflows, including prompt engineering, test generation, or defect analysis using AI models.
  • Hands-on experience testing AI-driven solutions, LLM-based systems, or AI agents.
  • Experience designing and maintaining test automation frameworks for scalability, reliability, and observability.
  • Deep understanding of CI/CD pipelines (GitHub Actions, Jenkins, GitLab, CircleCI) and cloud-native environments (AWS, Kubernetes).
  • Familiarity with GraphQL, Apollo, React, and microservice-based architectures.
  • Strong mentorship, communication, and collaboration skills.
  • Passion for innovation, particularly in AI, automation, and developer productivity.
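The requirements above call for deep familiarity with CI/CD pipelines such as GitHub Actions. A minimal workflow for running Playwright tests on every pull request might look like the following sketch; the workflow name and Node version are placeholder choices, not details from this posting:

```yaml
# Hypothetical minimal CI workflow for a Playwright suite.
name: e2e-tests
on: [pull_request]
jobs:
  playwright:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      - run: npx playwright install --with-deps  # browsers + OS dependencies
      - run: npx playwright test
```

A production pipeline would usually add artifact upload for traces and reports, sharding across runners, and the quality-metric feedback loops described in the responsibilities above.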
Technology Stack
  • Frontend: React, GraphQL, Apollo
  • Backend: Node.js, TypeScript, Microservices
  • Database: Postgres, Redis, DynamoDB
  • Cloud: AWS, Kubernetes, Lambda
  • Testing: Playwright, Jest, Custom MCP tools
  • CI/CD: GitHub Actions, CircleCI, Jenkins, ArgoCD
  • AI & Automation: AI agents for QA, AI-assisted test generation, model validation frameworks, prompt-based testing utilities
Nice to Haves
  • Experience developing or testing AI agents (autonomous or semi-autonomous assistants).
  • Knowledge of LangChain, LlamaIndex, or other orchestration frameworks for testing LLM applications.
  • Experience with AI model testing (data drift, regression analysis, hallucination detection).
  • Familiarity with model observability and quality metrics for LLM-based products.
  • Exposure to mobile testing (React Native, Appium, Maestro).
  • Experience leading QA or automation teams or owning large-scale test infrastructure.

$120,000 - $150,000 a year


Apply for this Job

Please use the APPLY HERE link below to view additional details and application instructions.

Apply Here
