Quality Department
Mission
Quality at GitLab is everyone’s responsibility. The Quality Department ensures that everyone knows, empirically, what the quality of the product is.
In addition, we empower our teams to ship world-class enterprise software at scale, with quality and velocity.
Our principles
- Foster an environment where Quality is Everyone’s responsibility.
- We enable product teams to bake quality into the product development flow early.
- We are a sounding board for our end users, making their feedback known to product teams.
- We are a champion of good software design, testing practices and bug prevention strategies.
- Improve test coverage and leverage tests at all levels.
- We work to ensure that the right tests run at the right places.
- We enable product teams’ awareness of their test coverage with fast, clear and actionable reporting.
- We continuously refine test efficiency, refactor duplicate coverage, and increase stability.
- Make Engineering teams efficient, engaged and productive.
- We build automated solutions to improve workflow efficiency and productivity.
- We ensure reliability in our tooling and tests.
- We ensure that continuous integration pipelines are efficient and stable, with optimal coverage.
- Metrics-driven.
- We provide data-driven insights into defects, test stability, efficiency, and team execution health.
- We ensure the data is actionable and transparently available to the company and the wider community.
- We use data to inform next steps and continuously improve with metrics-driven optimizations.
FY23 Direction
In FY23 we will focus on contributor success and customer results while delivering impact to the company’s bottom line via alignment to top cross-functional initiatives. Key directional highlights: be more customer-centric in our work, execute the 10x contributor strategy jointly with Marketing, provide timely operational analytics insights, and improve team member engagement. In FY23 we anticipate a large increase in cross-functional activity within the company. Fostering an open, collaborative environment is more important than ever for us to deliver results.
Customer Centric Quality
- Expand and improve upon the deploy-with-confidence foundation.
- Reach out to understand users’ needs for reference architectures.
- Improve the maturity of reference architectures, their components, and their validation.
- Expand the capability of staging environments according to engineers’ needs.
- Increase customer empathy by participating in activities that highlight their pain points.
Increase Contributions
- Execute the funnel-approach strategy with the Marketing division.
- Reduce review and merge time of contributions, with a focus on per-product-group analysis.
- Increase the number of MR Coaches, prioritizing product groups that are more popular with contributors.
- Identify, recognize, and retain regular contributors.
- Set up community teams to foster contributors’ sense of belonging.
- Increase the variety and frequency of community awards.
Customer Contributions
- Reach out to large customers to understand their needs for contribution.
- Establish a predictable, customer-realized time-to-value for contributions.
- Increase MRARR.
- Implement customer contribution recognition.
- Collaborate with Marketing on Customer Contribution Strategy.
Productivity
- Reduce manual burden for SET and EP engineers on-call.
- Reduce time to first failure of GitLab pipelines.
- Reduce duration of GitLab pipelines.
- Maintain high stability of the GitLab master pipeline.
- Increase MR Rate.
Analytics Driven
- Implement key data & metrics for Engineering in a timely fashion.
- Improve format and delivery of Engineering performance indicators.
- Ensure all Engineering teams have real-time awareness of the big picture of their engineering metrics.
- Provide work allocation insights between Engineering teams & departments.
- Empower teams to take action on quality in their areas via data.
Team Growth & Engagement
- Provide geo-diverse department activity and presence.
- Provide clear actionable career paths for all job-families.
- Provide learning & growing opportunities for the management team.
- Collaborate with recruitment to ensure timely hiring.
- Grow Contributor Success & Engineering Analytics to their full capacity.
- Every manager understands their team’s MR Rate.
OKRs
Objectives and Key Results (OKRs) help align our department toward what really matters. They happen quarterly, are based on company OKRs, and follow the OKR process defined in the handbook (/company/okrs/#how-to-use-gitlab-for-okrs). We check in on our progress routinely throughout the quarter to determine whether we are on track or need to pivot in order to accomplish or change these goals. At the end of the quarter, we do a final scoring, which includes a retrospective on how the quarter went according to these OKRs.
Current quarter
FY24 Q1 Quality Department OKR overview
Previous quarter (FY23Q4)
- ARR OKR: Increase Revenue by key Business Results => 94%
- KR: Resolve 30 S1/S2 customer impacting bugs and reference architecture improvements => 100%
- KR: Support FedRAMP initiative through and post-RAR audit => 93%
- KR: Support successful completion of fulfillment systems initiatives => 83%
- KR: Create SSOT reporting for cost of hosting-related infrastructure => 100%
- Product OKR: Increase Quality with test efficiency & customer insights => 63%
- KR: Create dashboards for customer impacting S1/S2 bugs by development teams => 50%
- KR: Reduce TtFF P80 from 33 minutes to 20 minutes => 40%
- KR: Improve master stability from 90% to 95% => 60%
- KR: Reduce the package-and-test duration by 30% => 100%
- Product OKR: Increase Community Contributions with recognition, events and Leading Orgs on-boarding => 80%
- KR: Scale the Community through a contributor event, mentoring & onboarding of customers
- KR: Increase Contribution Value by increasing collaboration and increasing recognition
- Team OKR: Hire and grow the team => 59%
- KR: Meet FY23Q4 hiring target
- KR: Complete talent assessment on time with quality
- KR: Increase URG representation by hiring 2 management positions classified as URG => 0%
- Aspirational OKR: Educate team members on being “Best in Class” & identify operational gaps => 86%
- KR: 100% of team members review GitLab competitive analysis => 98%
- KR: Identify 6 test process improvements in areas that are behind competition maturity => 74%
Staffing planning
We staff our department with the following gearing ratios:
Software Engineer in Test
- Primary Ratio: 1 Software Engineer in Test per Product Group.
- This ratio is captured as a department performance indicator.
- We are improving this ratio by factoring in additional facets of each product group rather than blanket-allocating staffing to every product group. These facets include:
- Driver scores (Usage, SMAU, SAM)
- Revenue path (ARR, Uptier)
- Customer adoption journey
- Self-managed & Reference Architecture impact
- Must work areas
- Development and UX facets (number of Engineers, SUS issues)
- For more information, please see the SET Gearing Prioritization Model (GitLab only). With these adjustments, we would be at ~85% of the 1:1 ratio to every product group.
- Product groups with high complexity may need more than one SET.
- Newly formed product groups may not have an allocated SET. They may be allocated one in the future.
- Secondary Ratio: Approximately a 1:8 ratio of Software Engineer in Test to Development Department Engineers.
Engineering Productivity Engineer
- Primary Ratio: 1 Engineering Productivity Engineer per Product Stage.
- This ratio is captured as a department performance indicator.
- Secondary Ratio: Approximately a 1:40 ratio of Engineering Productivity Engineers to Development Department Engineers.
Quality Engineering Manager
- Primary Ratio: 1 Quality Engineering Manager per Product Section.
- This ratio is captured as a department performance indicator.
- Secondary Ratio: Approximately a 1:1 ratio of Quality Engineering Manager to Development Department Directors.
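To make the arithmetic concrete, here is a minimal sketch of how the secondary ratios translate into headcount targets; the 400-engineer Development Department size is a made-up input for illustration, not an actual figure.

```ruby
# Illustrative sketch only: converts the secondary gearing ratios above
# into headcount targets. DEV_ENGINEERS is a hypothetical input.
DEV_ENGINEERS = 400

SECONDARY_RATIOS = {
  'Software Engineer in Test'         => 8,  # 1 SET per 8 Development engineers
  'Engineering Productivity Engineer' => 40  # 1 EP engineer per 40 Development engineers
}.freeze

SECONDARY_RATIOS.each do |role, engineers_per_hire|
  target = (DEV_ENGINEERS / engineers_per_hire.to_f).ceil
  puts format('%-35s ~%d', role, target)
end
```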
Communication
In addition to GitLab’s communication guidelines and engineering communication, we communicate and collaborate actively across GitLab in the following venues:
Week-in-review
By the end of the week, we populate the Engineering Week-in-Review document with relevant updates from our department. The agenda is internal only; please search Google Drive for ‘Engineering Week-in-Review’.
Every Monday, a reminder is sent to all of Engineering in the #eng-week-in-review Slack channel to read the summarized updates in the Google doc.
Engineering-wide retrospective
The Quality team holds an asynchronous retrospective for each release.
The process is automated, and notes are captured in Quality retrospectives (GitLab only).
Engineering Metrics task process
We track work related to Engineering performance indicators in the Engineering Analytics board.
This board is used by the Engineering Analytics team to:
- Summarize status and progress on work related to Engineering’s KPIs and metrics.
- Distinguish between planned projects for the current quarter and ad-hoc requests received by the Engineering Analytics team.
The work effort on Engineering Division and Departments’ KPIs/RPIs is owned by the Engineering Analytics team. This group maintains the Engineering Metrics page.
DRIs
The Engineering Analytics board is structured by the analytics needs within each Engineering Department. At the beginning of each quarter, the team declares and prioritizes projects related to long-standing analytics needs for one or more Engineering Departments. In addition, the team also takes on ad-hoc requests ranging from maintenance of existing KPIs and dashboards to consultation on new metrics and data related to Engineering operations.
The ownership of the work columns follows the stable counterpart assignment of the Engineering Analytics team to each Engineering Department.
In order to engage with the team, please refer to the Engineering Analytics team’s handbook page for the appropriate Slack channels and projects for creating issues for our team.
DRI Responsibilities
- Prepare the board before the data team sync meeting.
- Interface with the department leader you report to in your 1:1; capture the ask and its relative prioritization, and populate the board.
- Capture the ask as issues and reorder them by priority.
- Issues that are not important or not currently being worked on can sit below the cut-line.
- In the data team sync meeting, ensure that the data team is aware of dependencies and blockers.
Process
- Create an issue with the `~"Engineering Metrics"` label to be added to the Engineering Analytics board (see the API sketch after this list).
- State clearly the requirements and measures of the performance indicator.
- The Director of Engineering Analytics is the DRI for triage, prioritization, and assignment.
- If work can be done without the need for new data warehouse capabilities, the DRI will schedule and assign the work within Engineering.
- If new data warehouse capabilities are needed from the Data team, a linked issue will be created on the Data team’s Engineering board.
- Requests for support from the Data team will be reviewed during Data Triage or by requesting an expedited review.
- Every KPI issue is either assigned to the backlog or given a due date. The Engineering team will first propose a due date, which the Results DRI will confirm if possible or counter with the next possible date.
- Discussions take place in #eng-data-kpi as needed.
- Every new KPI/RPI should follow our standardized format.
- The closure of the issue should be done with a merge request to the performance indicator page(s).
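As an illustration of the first step, a request could be filed programmatically via the GitLab REST API along the lines of the sketch below; the project ID, issue title, and token handling are placeholders rather than part of the documented process.

```ruby
# Hedged sketch: files a performance-indicator request with the
# ~"Engineering Metrics" label via the GitLab REST API.
# PROJECT_ID and GITLAB_TOKEN are placeholders.
require 'net/http'
require 'json'
require 'uri'

PROJECT_ID = 123 # hypothetical target project
uri = URI("https://gitlab.com/api/v4/projects/#{PROJECT_ID}/issues")

request = Net::HTTP::Post.new(uri)
request['PRIVATE-TOKEN'] = ENV.fetch('GITLAB_TOKEN')
request.set_form_data(
  'title'       => 'KPI request: <indicator name>',
  'description' => 'Requirements and measures of the performance indicator.',
  'labels'      => 'Engineering Metrics' # routes the issue onto the board
)

response = Net::HTTP.start(uri.hostname, uri.port, use_ssl: true) do |http|
  http.request(request)
end

puts JSON.parse(response.body)['web_url']
```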
Task management
We have top-level boards (at the `gitlab-org` level) to communicate what is being worked on across all teams in Quality Engineering.
Each board has a cut-line on every column that is owned by an individual. Tasks can be moved vertically to sit above or below the cut-line.
The cut-line is used to determine team member capacity; it is assigned to the `Backlog` milestone. The board itself pulls from any milestone as a catch-all, so we have insight into past, current, and future milestones.
The cut-line also prompts healthy discussion between engineers and their managers in 1:1s. Every task on the board should be sized according to our weight definitions.
How to use the board and cut-line
- Items above the cut-line are in progress and have current priority.
- Items below the cut-line are not being actively worked on.
- Engineers should self-update content in their column, and be aware of their overall assignments before coming to their 1:1s.
- Managers should be aware of their overall team assignments. Please review your boards and refine them frequently according to the department goals and business needs.
- Highlight blockers and tasks that are under-weighted. Consider adjusting the weights to communicate the challenges/roadblocks broadly. Use `~"workflow::blocked"` to indicate a blocked issue.
- Weight adjustments are a healthy discussion. Sometimes an issue may be over- or under-weighted; this calibration should be a continuous process. Nothing is perfect; we take learnings as feedback for future improvements.
- We aim to have a total weight of roughly 15 assigned to any person at a given time, to ensure that engineers are not overloaded and to prevent burnout. The number may vary during onboarding and similar periods (see the capacity-check sketch below).
Discussion on the intent and how to use the board
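As a rough aid for the ~15-weight guideline above, the following sketch tallies one person’s open-issue weight via the GitLab issues API; the username is hypothetical, and this is an unofficial capacity check, not part of the board workflow.

```ruby
# Hedged sketch: tallies the total weight of one engineer's open issues
# in the gitlab-org group, to compare against the ~15 guideline.
require 'net/http'
require 'json'
require 'uri'

USERNAME = 'example-engineer' # hypothetical

uri = URI('https://gitlab.com/api/v4/groups/gitlab-org/issues')
uri.query = URI.encode_www_form(
  'assignee_username' => USERNAME,
  'state'             => 'opened',
  'per_page'          => 100
)

request = Net::HTTP::Get.new(uri)
request['PRIVATE-TOKEN'] = ENV.fetch('GITLAB_TOKEN')

issues = JSON.parse(
  Net::HTTP.start(uri.hostname, uri.port, use_ssl: true) { |http| http.request(request) }.body
)

total_weight = issues.sum { |issue| issue['weight'].to_i } # unweighted issues count as 0
puts "#{USERNAME}: #{total_weight} total weight across #{issues.size} open issues"
```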
Team boards
The boards serve as a single pane of glass view for each team and help in communicating the overall status broadly, transparently and asynchronously.
Quality Department on-call rotations
Pipeline triage
Every member in the Quality Department shares the responsibility of analyzing the daily QA tests against the `master` and `staging` branches.
More details can be seen here.
Incident management
Every manager and director in the Quality Department shares the responsibility of monitoring new and existing incidents
and responding or mitigating as appropriate. Incidents may require review of test coverage, test planning, or updated
procedures, as examples of follow-up work which should be tracked by the DRI.
The Quality Department has a rotation for incident management. The rotation can be seen here.
Please note: Though there is a rotation for DRI, any manager or director within Quality can step in to help in an urgent situation if the primary DRI is not available. Don’t hesitate to reach out in the Slack channel #quality-managers.
Refinement processes
Below are a few venues of collaboration with the Development department.
Bug Refinement
To mitigate high-priority issues like performance bugs and transient bugs, Quality Engineering will triage and refine those issues for Product Management and Development via a bi-weekly Bug Refinement process.
Goals
- To make the performance of various aspects of our application empirical with tests, environments, and metrics.
- To minimize the transient bugs seen in our application, thereby improving usability.
Identifying Issues
Quality Engineering will do the following in order to identify the issues to be highlighted in the refinement meeting:
- Review existing customer-impacting performance bugs in our issue tracker and add the `~"bug::performance"` label.
- Review issues raised due to failures in the daily performance tests, and identify early warnings of performance degradation that have not had customer exposure but pose a future risk. Apply the `~"bug::performance"` label to these issues as well.
- Review all issues labeled `~"bug::transient"`.
Process
- A manager in the Quality Engineering department will lead refinement with issues populated beforehand in the issue boards.
- The performance refinement board is used to triage performance issues.
- The transient bugs board is used to triage transient issues.
- Before each meeting, for issues that are not yet fully triaged, the QEM meeting lead will assign the QEM of the appropriate stage or group to prioritize them.
- The QEM meeting lead should review the board for long-running issues that do not have meaningful activity and add them to the agenda to be considered for closure if no longer actionable (a query sketch for surfacing these follows this list).
- Any high impact issues which need wider awareness should be added to the agenda for discussion by the relevant stakeholder. This includes urgent performance/transient issues as well as those that have been surfaced as important for customers.
- Issues surfaced to the refinement meeting will be assigned severity and priority according to our definitions.
- Guest attendees who may be relevant for a topic on the agenda (product group engineering managers or product managers, technical account managers, support engineers, or others) should be added to the calendar invite.
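For the long-running-issue review mentioned above, a query along these lines could surface candidates; the 90-day threshold and the use of the group issues API are illustrative assumptions, not part of the documented process.

```ruby
# Hedged sketch: lists ~"bug::transient" issues with no updates in the
# last 90 days, as candidates for closure review. The threshold is arbitrary.
require 'net/http'
require 'json'
require 'uri'
require 'time'

uri = URI('https://gitlab.com/api/v4/groups/gitlab-org/issues')
uri.query = URI.encode_www_form(
  'labels'         => 'bug::transient',
  'state'          => 'opened',
  'updated_before' => (Time.now - 90 * 24 * 3600).utc.iso8601,
  'per_page'       => 100
)

request = Net::HTTP::Get.new(uri)
request['PRIVATE-TOKEN'] = ENV.fetch('GITLAB_TOKEN')

issues = JSON.parse(
  Net::HTTP.start(uri.hostname, uri.port, use_ssl: true) { |http| http.request(request) }.body
)

issues.each { |issue| puts "#{issue['web_url']} (last updated #{issue['updated_at']})" }
```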
Development request issues
Quality Engineering will track productivity, metric, and process automation improvement work items in the Development-Quality board to serve the Development department.
Requirements and requests are to be created with the `~dev-quality` label. The heads of both departments will review and refine the board on an ongoing basis.
Issues will be assigned to and worked on by an engineer in the Engineering Productivity team, and completion of each work item will be communicated broadly.
Release process overview
Moved to release documentation.
Security Questionnaires
The Quality department collaborates with the Security department’s compliance team to handle requests from customers and prospects.
The Risk and Field Security team maintains the current state of answers to these questions; please follow the process to request completion of an assessment questionnaire.
If additional input is needed from the Quality team, the DRI for this is the Director of Quality. Supplemental requests will be tracked via a confidential issue in the compliance issue tracker. Once the additional inputs have been supplied, they are stored in the Compliance team’s domain for efficiency.
Department recurring event DRIs
| Recurring event | Primary DRI | Backup DRI | Cadence | Format |
| --- | --- | --- | --- | --- |
| Quality Key Review | @meks | @nick_vh, @vincywilson | Every 8 weeks | Review meeting |
| Group conversation | @meks, @at.ramya, @vincywilson, @nick_vh, @jo_shih | @vincywilson | Every 8 weeks | Group Conversations |
| GitLab SaaS Infrastructure Weekly | Rotates between @jo_shih, @vincywilson | @vincywilson | Weekly | Incident review and corrective action tracking |
| Incident management | Rotates between @jo_shih, @at.ramya, and @vincywilson | All managers | Weekly | Incident monitoring, response, and management as needed to represent Quality |
| Self-managed environment triage | @vincywilson | @vincywilson | Every 2 weeks | Sync stand-up |
| Bug refinement | Rotates between @at.ramya, @jo_shih | @vincywilson | Weekly | Review meeting |
| Security Vulnerability review | @meks | TBD | Every 4 weeks | Review meeting |
| Quality Department Staff Meeting | @meks | TBD | Weekly | Review meeting |
| Quality Department Bi-Weekly | Department management team | @meks | Every 2 weeks | Review meeting |
| Quality Department Social Call | All team members | All team members | Every 2 weeks | Meet and Greet |
| Quality Hiring Bi-Weekly | All QEMs, Directors, and VP | TBD | Every 2 weeks | Review meeting |
| Ops section stakeholder review | @jo_shih | @dcroft, @zeffmorgan | Every 4 weeks | Review meeting |
| Enablement Sync with AppSec | @vincywilson | TBD | Monthly | Review meeting |
Quality Engineering initiatives
Triage Efficiency
Due to the volume of issues, one team cannot handle the triage process alone.
We invented Triage Reports to scale the triage process horizontally across Engineering.
More on our Triage Operations
Test Automation Framework
The GitLab test automation framework is distributed across two projects:
- GitLab QA, the test orchestration tool.
- The scenarios and spec files within the GitLab codebase, under `/qa`.
Installation and execution
- Install and set up the GitLab Development Kit.
- Install and run GitLab QA to kick off test execution.
- The spec files (test cases) can be found in the GitLab codebase.
Test results tracking
- Within a test spec, each test example is associated with one GitLab test case:
```ruby
RSpec.describe 'Stage' do
  describe 'General description of the feature under test' do
    it 'test name', testcase: 'https://gitlab.com/gitlab-org/gitlab/-/quality/test_cases/:test_case_id' do
      ...
    end

    it 'another test', testcase: 'https://gitlab.com/gitlab-org/gitlab/-/quality/test_cases/:another_test_case_id' do
      ...
    end
  end
end
```
- Each GitLab test case contains information about the test example: file path, example name, example description, and historical records of the associated test case issue.
- A test case issue is an issue that lives within the quality/testcase project. This issue can be created either by the author of the test or by the GitLab QA Bot, and should have description content very similar to that of the test case it is associated with. This is where we keep track of test results from all environments in which the test is executed.
- When a test case issue reaches its comment limit, a new test case issue with the same content is automatically created during the next test execution and added to the test case’s historical records. The old test case issue is closed.
Documentation and videos
Performance and Scalability
The Quality Department is committed to ensuring that self-managed customers have performant and scalable configurations.
To that end, we are focused on creating a variety of tested and certified Reference Architectures. Additionally, we
have developed the GitLab Performance Tool, which provides several ways to measure the performance of any GitLab instance. We use the tool every day to monitor for potential performance degradations, and GitLab customers can also use it to test their on-premise instances directly. More information is available on our Performance and Scalability page.
MRARR
The Quality department is the DRI for MRARR tooling and tracking. MRARR is an important part of the Open Core three-year strategy to increase contributions from the wider community.
Customer contributor tracking
Customer contributors are currently tracked in a Google Sheet that is imported to Sisense every day. Data has been sourced from Bitergia and from reviewing previous wider community contributions.
Customer contributor additions
Additions have been identified through the following means and added to the source above once confirmed by a Manager in the Quality Department.
- Indication from a member of the Sales team
- Contributor is linked to a Salesforce contact
- Confirmation with other public sources
- Identifying the organization, commit email or other public user information on the merge request.
- Validating that the contributor is associated with a customer organization by using other public sources such as LinkedIn.
- Verifying that the organization is a paying GitLab customer by using Salesforce.com to open the Account and review the CARR fields.
After verifying that a contributor is associated with a customer, follow these steps to add the new contributor to the tracking sheet:
- Check if the customer organization is already defined in the spreadsheet by the Salesforce Account ID. If not, add a new row with the following information:
- Salesforce Account name for the Contributor Organization (a)
- Full 18-character Salesforce Account ID for the SFDC Account ID column (c). This can be derived from the 15-character ID with a gem like `salesforce_id_formatter` (or the checksum algorithm sketched below).
- Add the Contributor’s GitLab.com username to the Contributor Usernames column (b). The format of this column is a JSON array; use double-quoted strings separated by commas, e.g. `["user_one", "user_two"]`.
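If the gem is unavailable, the 15-to-18 character conversion itself is a public Salesforce checksum algorithm; the following is a minimal sketch of it (the `salesforce_id_formatter` gem’s actual API may differ).

```ruby
# Minimal sketch of the public Salesforce 15-to-18 character ID algorithm;
# the salesforce_id_formatter gem's API may differ from this.
CHECKSUM_CHARS = (('A'..'Z').to_a + ('0'..'5').to_a).freeze

def to_18_char_id(id15)
  raise ArgumentError, 'expected a 15-character ID' unless id15.length == 15

  suffix = id15.chars.each_slice(5).map do |chunk|
    # Build a 5-bit value: bit i is set when character i is uppercase.
    bits = chunk.each_with_index.sum { |char, i| char.match?(/[A-Z]/) ? (1 << i) : 0 }
    CHECKSUM_CHARS[bits]
  end.join

  id15 + suffix
end

puts to_18_char_id('001A0000006Vm9r') # prints the case-safe 18-character form
```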
Diagnostic dashboard
The MRARR Diagnostics dashboard contains some helpful supplemental charts to understand changes in MRARR and untracked contributors.