# BINK API 2.0
This is a test-automation framework, written in Python, for Bink's APIs. It is built on the Pytest-BDD plugin to implement the BDD approach. The framework's modules are designed so they can be reused by all merchants across all channels. The framework provides a regression suite for all available API endpoints, and also serves sanity, smoke, and in-sprint testing for all channels and merchants.
## Set Up

This project requires an up-to-date version of Python 3 and uses Poetry to manage packages. To set up this project on your local machine:

- Clone the repo from GitHub (`git@github.com:binkhq/bink-api-v2-automation-suite.git`)
- Execute `poetry shell` from the project's root directory to create the virtual environment
- Execute `poetry install` to install dependencies from `pyproject.toml`
- Install the Azure CLI and log in to Azure for Key Vault access: `brew install azure-cli`, then `az login`
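The steps above can be run as a single sequence. This is a sketch assuming Homebrew is available and Poetry is already installed:

```shell
# Clone and enter the project
git clone git@github.com:binkhq/bink-api-v2-automation-suite.git
cd bink-api-v2-automation-suite

# Create/enter the virtual environment and install dependencies
poetry shell
poetry install

# Azure CLI for Key Vault access
brew install azure-cli
az login
```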
## Executing Tests Locally

- Test execution:
  - Use the `pytest` command
  - Use the `-m` option to filter tests by BDD tags
  - Pass `--env` to set the target environment
  - The default environment is staging and the default channel is bink
- A few sample execution commands:
  - `pytest -m "add" --env staging` : execute the Add journey for all merchants in staging
  - `pytest -m "add and viator" --env staging` : execute the Add journey for Viator in staging
  - `pytest -m "add and enrol"` : execute the Add & Enrol journeys for all merchants in staging
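The `-m` option accepts boolean expressions over tags (`and`, `or`, `not`). The sketch below illustrates the selection semantics with a simplified evaluator; it is not pytest's actual implementation, and the `matches` helper is purely illustrative:

```python
# Simplified illustration of how pytest's "-m" expressions select tests.
# Not pytest's real implementation - just the selection semantics.

def matches(expression: str, tags: set[str]) -> bool:
    """Evaluate a marker expression such as 'add and viator' against a test's tags."""
    tokens = []
    for token in expression.split():
        if token in ("and", "or", "not"):
            tokens.append(token)  # keep boolean operators as-is
        else:
            # Replace each bare tag with True/False based on membership
            tokens.append(str(token in tags))
    # Let Python evaluate the resulting boolean expression
    return eval(" ".join(tokens))  # trusted, illustrative input only

# A scenario tagged @add and @viator:
print(matches("add and viator", {"add", "viator"}))  # True  -> selected
print(matches("add and enrol", {"add", "viator"}))   # False -> skipped
```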
- Command used for nightly regression for bink in staging:
  - `pytest -m "bink_regression_api2.0" --env staging`
- Run database queries from the test scripts:
  - Connect to Tailscale
  - Set the environment variable for the database in the terminal:
    `set HERMES_DATABASE_URI $(kubectl get secret azure-postgres -o json | jq -r .data.url_hermes | base64 --decode)`
    (Available variables: HERMES_DATABASE_URI, HARMONIA_DATABASE_URI, SNOWSTORM_DATABASE_URI)
  - Execute the tests as usual locally
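Test code can then pick the connection string up from the environment. A minimal sketch (the `database_uri` helper name and error message are illustrative, not the suite's actual API):

```python
import os

def database_uri(name: str = "HERMES_DATABASE_URI") -> str:
    """Return the DB connection string exported in the terminal."""
    uri = os.environ.get(name)
    if uri is None:
        raise RuntimeError(
            f"{name} is not set - export it from the azure-postgres secret first"
        )
    return uri

# Demo only: normally the variable is set in the shell, not in code
os.environ["HERMES_DATABASE_URI"] = "postgresql://user:pass@host:5432/hermes"
print(database_uri())
```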
## Executing Tests from Kubernetes Pods

- Execute the whole suite by creating a new job from the cron job:
  - `kubectl create job --from=cronjob/pyqa-apiv2 <jobname>`
  - Check the pod status: a new pod will be created and will be in 'Running' status
  - Once all the tests are completed, the HTML result will be published in the Alerts-QA Teams channel
- Execute a subset of tests from the newly created pod:
  - Get the pod name: `kubectl get pods`
  - Get into the pod: `kubectl exec -it <pyqa pod name> -- bash`
  - Identify the subset of tests to execute by using the tags in the .feature files
  - Execute the tests from the terminal: `pytest -m "<unique tag name>" --env staging`
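The subset workflow above as one terminal session (`manual-run`, the pod name, and the tag are placeholders to replace with your own values):

```shell
# Create a one-off job from the cron job
kubectl create job --from=cronjob/pyqa-apiv2 manual-run

# Find the new pyqa pod and shell into it
kubectl get pods
kubectl exec -it <pyqa pod name> -- bash

# Inside the pod: run only the tests carrying the chosen tag
pytest -m "<unique tag name>" --env staging
```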
## Scheduled Regression Execution

- The whole suite runs on the cron schedule `0 20 * * 1-5` (Monday to Friday at 8pm). If the schedule needs to change, update it in gitops (https://github.com/binkhq/gitops/blob/master/overlays/uksouth-staging/olympus/pyqa-apiv2/cronjob.yaml)
- The results are published in the Alerts-QA Teams channel
- The generic tag used for regression execution: `@bink_regression_api2`
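To sanity-check the schedule, the cron fields `0 20 * * 1-5` read as minute 0, hour 20, any day of month, any month, days of week 1-5 (Monday-Friday). An illustrative checker (not the scheduler itself):

```python
from datetime import datetime

def cron_0_20_weekdays(dt: datetime) -> bool:
    """True if dt matches the cron spec '0 20 * * 1-5' (Mon-Fri at 20:00)."""
    # cron day-of-week 1-5 = Monday..Friday; datetime.isoweekday() uses the same numbering
    return dt.minute == 0 and dt.hour == 20 and 1 <= dt.isoweekday() <= 5

print(cron_0_20_weekdays(datetime(2024, 1, 1, 20, 0)))  # Monday 8pm  -> True
print(cron_0_20_weekdays(datetime(2024, 1, 6, 20, 0)))  # Saturday    -> False
```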