QA Automation Lead (Performance Testing) - India
Juniper Square
Location: India
Employment Type: Full time
Location Type: Remote
Department: Engineering
About Juniper Square
Our mission is to unlock the full potential of private markets. Privately owned assets like commercial real estate, private equity, and venture capital make up half of our financial ecosystem yet remain inaccessible to most people. We are digitizing these markets, and as a result, bringing efficiency, transparency, and access to one of the most productive corners of our financial ecosystem. If you care about making the world a better place by making markets work better through technology – all while contributing as a member of a values-driven organization – we want to hear from you.
Juniper Square offers employees a variety of ways to work, ranging from a fully remote experience to working full-time in one of our physical offices. We invest heavily in digital-first operations, allowing our teams to collaborate effectively across 27 U.S. states, 2 Canadian Provinces, India, Luxembourg, and England. We also have physical offices in San Francisco, New York City, Mumbai and Bangalore for employees who prefer to work in an office some or all of the time.
About your role
As the QA Automation Lead for performance testing at Juniper Square, you will define the performance engineering strategy for the organization. You will ensure our systems can handle hyper-growth and peak traffic events through proactive modeling, automated gating, and deep architectural analysis.
What you’ll do
Review functional and non-functional requirements, technical design documents, and provide meaningful feedback to identify performance risks early.
Design, develop, and execute performance, load, and stress tests for web and backend systems.
Use AI to analyze production traffic patterns and automatically generate representative performance scripts in Locust or JMeter that mirror real-world user behavior.
Build and maintain performance test scripts using Python, primarily leveraging Locust (tool-agnostic mindset, but Locust experience is a strong plus).
Collaborate closely with development, QA, DevOps, and SRE teams to define performance benchmarks, SLAs, and acceptance criteria.
Analyze test results to identify bottlenecks related to application code, APIs, databases, infrastructure, or third-party dependencies.
Produce clear and actionable performance test reports, highlighting trends, risks, and recommendations for optimization.
Integrate performance tests into CI/CD pipelines and support continuous performance testing practices.
Monitor application performance during releases and contribute to capacity planning and scalability discussions.
Lead performance engineering best practices and help shift performance testing left in the SDLC.
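To give a flavor of the result analysis described above, here is a minimal, hypothetical sketch in Python that turns raw per-request latencies from a load run into the usual summary metrics. The `summarize` helper, the synthetic data, and the nearest-rank percentile choice are illustrative assumptions, not Juniper Square code:

```python
import statistics

def summarize(latencies_ms, window_s):
    """Summarize a load-test run from raw per-request response times.

    latencies_ms: per-request response times in milliseconds
    window_s: duration of the measurement window in seconds
    Returns throughput and the latency percentiles most test reports use.
    """
    ordered = sorted(latencies_ms)

    def pct(p):
        # Nearest-rank percentile: pick the sample at the p-th rank.
        idx = max(0, min(len(ordered) - 1, round(p / 100 * len(ordered)) - 1))
        return ordered[idx]

    return {
        "requests": len(ordered),
        "throughput_rps": len(ordered) / window_s,
        "p50_ms": pct(50),
        "p95_ms": pct(95),
        "p99_ms": pct(99),
        "mean_ms": statistics.fmean(ordered),
    }

# Synthetic example: 1,000 requests over a 10-second window,
# with latencies cycling uniformly from 20 ms to 119 ms.
samples = [20 + (i % 100) for i in range(1000)]
report = summarize(samples, window_s=10)
```

In practice tools like Locust emit these aggregates themselves; a standalone reduction like this is mainly useful when correlating raw request logs or APM traces against test output.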
Qualifications
Education: Bachelor's degree in Computer Science, or equivalent professional experience.
Experience: 7-10 years in Software Quality Assurance, with at least 5 years focused on performance, load, stress, and endurance testing.
Performance Testing: Strong hands-on experience designing and executing performance test strategies for web applications and APIs, with the ability to read architectural diagrams and identify potential single points of failure.
Programming Skills: Strong proficiency in Python, with the ability to write clean, maintainable, and scalable test code.
Tools and Systems: Experience with performance testing tools such as Locust (preferred), JMeter, Gatling, or similar.
Metrics & Analysis: Solid understanding of performance metrics (response time, throughput, latency, error rates, resource utilization) and profiling techniques.
APIs & Backend: Hands-on experience testing REST APIs and backend services under load.
CI/CD: Experience designing and owning the performance gate in the CI/CD pipeline, ensuring performance regressions are caught automatically before reaching production.
Observability & Profiling: Advanced skills in using APM tools (e.g., Datadog, New Relic, or Dynatrace) and profiling tools to pinpoint code-level bottlenecks, memory leaks, and thread contention.
Data Strategy: Experience managing the large-scale, sanitized test data sets required for high-volume performance execution without skewing results through unrealistic cache hits.
Infrastructure & Cloud: Deep experience with AWS infrastructure (EC2, Lambda, RDS, ELB) and containerization (Docker, Kubernetes).
Test Process: Experience creating performance test plans, scenarios, workload models, and test data strategies.
Soft Skills: Excellent analytical and problem-solving abilities, attention to detail, and the ability to work independently within Agile development teams.
Communication: Clear written and verbal communication skills, with the ability to explain performance findings to both technical and non-technical stakeholders.
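The CI/CD performance gate called out above can be sketched, in a simplified and hypothetical form, as a comparison of the current run's metrics against a stored baseline. The `gate` function, the metric names, and the 10% regression threshold are all illustrative assumptions:

```python
def gate(baseline, current, max_regress_pct=10.0):
    """Fail the pipeline if any tracked metric regressed past the threshold.

    baseline / current: dicts mapping metric name -> value, where lower
    is better (e.g. p95 latency in ms, error rate in %).
    Returns a list of human-readable failures; an empty list means the
    gate passes and the release may proceed.
    """
    failures = []
    for metric, base in baseline.items():
        cur = current.get(metric)
        if cur is None:
            failures.append(f"{metric}: missing from current run")
            continue
        regress_pct = (cur - base) / base * 100.0
        if regress_pct > max_regress_pct:
            failures.append(
                f"{metric}: {base} -> {cur} "
                f"(+{regress_pct:.1f}%, limit {max_regress_pct}%)"
            )
    return failures

# Example: p95 latency regressed 50% while error rate improved,
# so the gate reports exactly one failure.
problems = gate(
    {"p95_ms": 200.0, "error_rate_pct": 0.5},
    {"p95_ms": 300.0, "error_rate_pct": 0.4},
)
```

In a real pipeline, a check like this would run against aggregates exported by the load tool after each automated performance stage, blocking promotion when it returns failures.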
AI Qualifications
Experience designing evaluation frameworks for LLM-powered features, including prompt regression testing and behavioral drift detection
Proactively leverage AI tools (e.g., Cursor, Gemini) to accelerate test authoring, debugging, and maintenance of automation frameworks
Use AI to diagnose failures, generate test scenarios, and improve coverage and efficiency
Contribute to testing strategies for AI-powered features, including validation of LLM outputs, edge cases, and reliability
Drive best practices for the ethical and effective use of AI tools within QA workflows and across the broader engineering team
At Juniper Square, we believe building a diverse workforce and an inclusive culture makes us a better company. If you think this job sounds like a fit, we encourage you to apply even if you don’t meet all the qualifications.