Package gcip

gcip API reference

Project structure and terminology of artifacts

To keep this source code folder as clean as possible, all code files are sorted into one of these folders:

  • core
  • lib
  • tools
  • addons

The core folder contains, as the name implies, all the core components that represent Gitlab CI objects in Python. Note that all class names from all Python modules within the core folder are mapped to the gcip root module; this is done within the __init__.py of the gcip package. Instead of gcip.core.job.Job you should import gcip.Job, and all other classes of the core folder the same way.

Always remember:

# Dos:
from gcip import Pipeline, Job, Sequence  # ... and so on

pipeline = Pipeline()
# Don'ts:
from gcip.core import pipeline, job

pipeline = pipeline.Pipeline()

The lib folder contains all higher level objects which are derived from the core objects. For example: gcip.Rule from gcip.core.rule is the general Gitlab CI Rule representation, whereas gcip.lib.rules contains some convenient predefined Rule instances like on_main() or on_tags().

The tools folder contains all code which is used by the library code but does not represent any Gitlab CI specific functionality. This directory also contains scripts which can be run on their own and are supposed to be called by Gitlab CI jobs during pipeline execution. For example gcip.tools.url.is_valid_url(str), which, as the name implies, checks whether str is a valid URL.
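
For illustration, such a tool function could be used like this (a minimal sketch, assuming the import path gcip.tools.url stated above):

from gcip.tools.url import is_valid_url

# validate a user supplied URL before using it, e.g. in an IncludeRemote
if not is_valid_url("https://gitlab.com/my-group/my-project"):
    raise ValueError("not a valid URL")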

The addons folder also contains code which extends the core components in the form of higher level objects that provide functionality for a specific use case. A use case could be python, ruby, cloudformation, ansible et cetera. Every subdirectory of addons is named after such a use case. The name addons was chosen with the intention that in the future the subdirectories will be outsourced into separate projects. This could happen once the core library is stable enough not to hinder the development of the downstream addons projects and the addons become too many to be maintained within the core library. However, at this point the project is small enough to provide the core and addon functionality in one easy-to-use all-in-one package.

We also use the following naming conventions throughout the library:

  • Files called _job_scripts.py hold functions that return strings, which can be used as commands within Gitlab CI jobs.
  • Directories called tools hold Python scripts which can be called by Gitlab CI jobs during pipeline execution. They will be called directly from the installed gcip library, e.g. python3 -m gcip.path.to.script (see the sketch below).
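
As a sketch of the second convention, a job could invoke such a tool module directly; the module path below is only a placeholder taken from the text above:

from gcip import Job

check_job = Job(
    stage="check",
    # placeholder module path from the convention above, replace with an actual tools script
    script="python3 -m gcip.path.to.script",
)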

Sub-modules

gcip.addons
gcip.core

The core module contains all direct Gitlab CI keyword representations.

gcip.lib
gcip.tools

Classes

class Artifacts (*paths: str, excludes: List[str] = [], expire_in: Optional[str] = None, expose_as: Optional[str] = None, name: Optional[str] = None, public: Optional[bool] = None, reports: Dict[ArtifactsReport, str] = {}, untracked: Optional[bool] = None, when: Optional[WhenStatement] = None)

This class represents the artifacts keyword.

Gitlab CI documentation: _"Use artifacts to specify a list of files and directories that are attached to the Job when it succeeds, fails, or always. […] by default, Jobs in later stages automatically download all the artifacts created by jobs in earlier stages. You can control artifact download behavior in jobs with dependencies."_

Args

paths : str
Paths relative to project directory $CI_PROJECT_DIR, found files will be used to create the artifacts.
excludes : List[str], optional
Paths that prevent files from being added to an artifacts archive. Defaults to [].
expire_in : Optional[str], optional
How long the artifacts will be saved before they get deleted. Defaults to None.
expose_as : Optional[str], optional
Used to expose artifacts in merge requests. Defaults to None.
name : Optional[str], optional
Name of the artifacts archive. Internally defaults to {PredefinedVariables.CI_JOB_NAME}-{PredefinedVariables.CI_COMMIT_REF_SLUG}.
public : Optional[bool], optional
True makes artifacts public. Defaults to None.
reports : Dict[ArtifactsReport, str]
Reports must be a valid dictionary; the key represents an ArtifactsReport and the value must be a valid relative file path to the report file. Defaults to {}.
untracked : Optional[bool], optional
If true, adds all untracked files to the artifacts archive. Defaults to None.
when : Optional[WhenStatement], optional
When to upload artifacts. Only on_success, on_failure or always is allowed. Defaults to None.

Raises

ValueError
If when is not one of WhenStatement.ALWAYS, WhenStatement.ON_FAILURE or WhenStatement.ON_SUCCESS.
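
A brief usage sketch (assuming Artifacts, ArtifactsReport and WhenStatement are importable from the gcip root module as described above):

from gcip import Artifacts, ArtifactsReport, WhenStatement

artifacts = Artifacts(
    "build/",
    "dist/",
    expire_in="1 week",
    reports={ArtifactsReport.JUNIT: "reports/junit.xml"},
    when=WhenStatement.ON_SUCCESS,
)
artifacts.add_paths("coverage/")
# render() returns the dictionary that is dumped into the .gitlab-ci.yml
print(artifacts.render())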
Expand source code
class Artifacts:
    def __init__(
        self,
        *paths: str,
        excludes: List[str] = [],
        expire_in: Optional[str] = None,
        expose_as: Optional[str] = None,
        name: Optional[str] = None,
        public: Optional[bool] = None,
        reports: Dict[ArtifactsReport, str] = {},
        untracked: Optional[bool] = None,
        when: Optional[WhenStatement] = None,
    ) -> None:
        """
        This class represents the [artifacts](https://docs.gitlab.com/ee/ci/yaml/#artifacts) keyword.

        Gitlab CI documentation: _"Use artifacts to specify a list of files and directories that are
            attached to the `gcip.core.job.Job` when it succeeds, fails, or always.
        [...] by default, `gcip.core.job.Job`s in later stages automatically download all the artifacts created
            by jobs in earlier stages. You can control artifact download behavior in jobs with dependencies."_


        Args:
            paths (str): Paths relative to project directory `$CI_PROJECT_DIR`,
                found files will be used to create the artifacts.
            excludes (List[str], optional): Paths that prevent files from being added to an artifacts archive. Defaults to [].
            expire_in (Optional[str], optional): How long the artifacts will be saved before they get deleted. Defaults to None.
            expose_as (Optional[str], optional): Used to expose artifacts in merge requests. Defaults to None.
            name (Optional[str], optional): Name of the artifacts archive.
                Internally defaults to {PredefinedVariables.CI_JOB_NAME}-{PredefinedVariables.CI_COMMIT_REF_SLUG}.
            public (Optional[bool], optional): True makes artifacts public. Defaults to None.
            reports (Dict[ArtifactsReport, str]): Reports must be a valid dictionary, the key represents an ArtifactsReport and the
                value must be a valid relative file path to the report file. Defaults to {}.
            untracked (Optional[bool], optional): If true, adds all untracked files to the artifacts archive. Defaults to None.
            when (Optional[WhenStatement], optional): When to upload artifacts. Only `on_success`, `on_failure` or `always` is allowed. Defaults to None.

        Raises:
            ValueError: If `when` is not one of `WhenStatement.ALWAYS`, `WhenStatement.ON_FAILURE` or `WhenStatement.ON_SUCCESS`.
        """
        self._paths: OrderedSetType = dict.fromkeys(
            [self._sanitize_path(path) for path in paths]
        )
        self._excludes: OrderedSetType = dict.fromkeys(
            [self._sanitize_path(exclude) for exclude in excludes]
        )
        self._expire_in = expire_in
        self._expose_as = expose_as
        self._name = (
            name
            if name
            else f"{PredefinedVariables.CI_JOB_NAME}-{PredefinedVariables.CI_COMMIT_REF_SLUG}"
        )
        self._public = public
        self._reports = {k.value: self._sanitize_path(v) for k, v in reports.items()}
        self._untracked = untracked
        self._when = when

        if self._when and self._when not in [
            WhenStatement.ALWAYS,
            WhenStatement.ON_FAILURE,
            WhenStatement.ON_SUCCESS,
        ]:
            raise ValueError(
                f"{self._when} not allowed. Only possible values are `on_success`, `on_failure` or `always`"
            )

    @staticmethod
    def _sanitize_path(path: str) -> str:
        """Sanitizes the given path.

        Uses `os.path.normpath()` to normalize path.
        Shorten `PredefinedVariables.CI_PROJECT_DIR` at the very beginning of the path to just '.'.

        Args:
            path (str): Path to get sanitized.

        Raises:
            ValueError: If path begins with `/` and is not `PredefinedVariables.CI_PROJECT_DIR`.

        Returns:
            str: Sanitized path.
        """
        _path = os.path.normpath(path)
        if _path.startswith(PredefinedVariables.CI_PROJECT_DIR):
            _path = _path.replace(PredefinedVariables.CI_PROJECT_DIR, ".")

        if _path.startswith("/"):
            raise ValueError(
                f"Path {_path} not relative to {PredefinedVariables.CI_PROJECT_DIR}."
            )
        return _path

    @property
    def paths(self) -> List[str]:
        """Equals the identical Class argument."""
        return list(self._paths.keys())

    def add_paths(self, *paths: str) -> Artifacts:
        self._paths.update(dict.fromkeys([self._sanitize_path(path) for path in paths]))
        return self

    @property
    def excludes(self) -> List[str]:
        """Equals the identical Class argument."""
        return list(self._excludes)

    def add_excludes(self, *excludes: str) -> Artifacts:
        self._excludes.update(
            dict.fromkeys([self._sanitize_path(exclude) for exclude in excludes])
        )
        return self

    @property
    def expire_in(self) -> Optional[str]:
        """Equals the identical Class argument."""
        return self._expire_in

    @expire_in.setter
    def expire_in(self, expire_in: str) -> Artifacts:
        self._expire_in = expire_in
        return self

    @property
    def expose_as(self) -> Optional[str]:
        """Equals the identical Class argument."""
        return self._expose_as

    @expose_as.setter
    def expose_as(self, expose_as: str) -> Artifacts:
        self._expose_as = expose_as
        return self

    @property
    def name(self) -> str:
        """Equals the identical Class argument."""
        return self._name

    @name.setter
    def name(self, name: str) -> Artifacts:
        self._name = name
        return self

    @property
    def public(self) -> Optional[bool]:
        """Equals the identical Class argument."""
        return self._public

    @public.setter
    def public(self, public: bool) -> Artifacts:
        self._public = public
        return self

    @property
    def reports(self) -> Dict[str, str]:
        """Equals the identical Class argument."""
        return self._reports

    @reports.setter
    def reports(self, reports: Dict[str, str]) -> Artifacts:
        self._reports = reports
        return self

    def add_reports(self, reports: Dict[ArtifactsReport, str]) -> Artifacts:
        self._reports.update({k.value: v for k, v in reports.items()})
        return self

    @property
    def untracked(self) -> Optional[bool]:
        """Equals the identical Class argument."""
        return self._untracked

    @untracked.setter
    def untracked(self, untracked: bool) -> Artifacts:
        self._untracked = untracked
        return self

    @property
    def when(self) -> Optional[WhenStatement]:
        """Equals the identical Class argument."""
        return self._when

    @when.setter
    def when(self, when: WhenStatement) -> Artifacts:
        self._when = when
        return self

    def render(
        self,
    ) -> Optional[
        Dict[
            str, Union[str, bool, List[str], Dict[str, str], Dict[ArtifactsReport, str]]
        ]
    ]:
        """Return a representation of this Artifacts object as dictionary with static values.

        The rendered representation is used by the gcip to dump it
        in YAML format as part of the .gitlab-ci.yml pipeline.

        Returns:
            Dict[str, Union[str, bool, List[str], Dict[str, str], Dict[ArtifactsReport, str]]]: A dictionary representing the
                artifacts object in Gitlab CI.
        """
        if not self._paths and not self._reports:
            return None

        rendered: Dict[
            str, Union[str, bool, List[str], Dict[str, str], Dict[ArtifactsReport, str]]
        ]
        rendered = {
            "name": self.name,
        }
        if self.paths:
            rendered["paths"] = list(self.paths)
        if self.excludes:
            rendered["excludes"] = list(self.excludes)
        if self.expire_in:
            rendered["expire_in"] = self.expire_in
        if self.expose_as:
            rendered["expose_as"] = self.expose_as
        if self.public is not None:
            rendered["public"] = self.public
        if self.reports:
            rendered["reports"] = self.reports
        if self.untracked is not None:
            rendered["untracked"] = self.untracked
        if self.when:
            rendered["when"] = self.when.value
        return rendered

    def _equals(self, artifact: Optional[Artifacts]) -> bool:
        """
        Returns:
            bool: True if self equals to `artifact`.
        """
        if not artifact:
            return False

        return self.render() == artifact.render()

Instance variables

prop excludes : List[str]

Equals the identical Class argument.

Expand source code
@property
def excludes(self) -> List[str]:
    """Equals the identical Class argument."""
    return list(self._excludes)
prop expire_in : Optional[str]

Equals the identical Class argument.

Expand source code
@property
def expire_in(self) -> Optional[str]:
    """Equals the identical Class argument."""
    return self._expire_in
prop expose_as : Optional[str]

Equals the identical Class argument.

Expand source code
@property
def expose_as(self) -> Optional[str]:
    """Equals the identical Class argument."""
    return self._expose_as
prop name : str

Equals the identical Class argument.

Expand source code
@property
def name(self) -> str:
    """Equals the identical Class argument."""
    return self._name
prop paths : List[str]

Equals the identical Class argument.

Expand source code
@property
def paths(self) -> List[str]:
    """Equals the identical Class argument."""
    return list(self._paths.keys())
prop public : Optional[bool]

Equals the identical Class argument.

Expand source code
@property
def public(self) -> Optional[bool]:
    """Equals the identical Class argument."""
    return self._public
prop reports : Dict[str, str]

Equals the identical Class argument.

Expand source code
@property
def reports(self) -> Dict[str, str]:
    """Equals the identical Class argument."""
    return self._reports
prop untracked : Optional[bool]

Equals the identical Class argument.

Expand source code
@property
def untracked(self) -> Optional[bool]:
    """Equals the identical Class argument."""
    return self._untracked
prop when : Optional[WhenStatement]

Equals the identical Class argument.

Expand source code
@property
def when(self) -> Optional[WhenStatement]:
    """Equals the identical Class argument."""
    return self._when

Methods

def add_excludes(self, *excludes: str) ‑> Artifacts
def add_paths(self, *paths: str) ‑> Artifacts
def add_reports(self, reports: Dict[ArtifactsReport, str]) ‑> Artifacts
def render(self) ‑> Optional[Dict[str, Union[str, bool, List[str], Dict[str, str], Dict[ArtifactsReport, str]]]]

Return a representation of this Artifacts object as dictionary with static values.

The rendered representation is used by the gcip to dump it in YAML format as part of the .gitlab-ci.yml pipeline.

Returns

Dict[str, Union[str, bool, List[str], Dict[str, str], Dict[ArtifactsReport, str]]]
A dictionary representing the artifacts object in Gitlab CI.
class ArtifactsReport (*args, **kwds)

This class represents the artifacts:reports types.

Expand source code
class ArtifactsReport(Enum):
    """This class represents the [artifacts:reports](https://docs.gitlab.com/ee/ci/yaml/#artifactsreports) types."""

    API_FUZZING = "api_fuzzing"
    """The api_fuzzing report collects API Fuzzing bugs as artifacts."""

    COBERTURA = "cobertura"
    """The cobertura report collects Cobertura coverage XML files."""

    CODEQUALITY = "codequality"
    """The codequality report collects Code Quality issues as artifacts."""

    CONTAINER_SCANNING = "container_scanning"
    """The container_scanning report collects Container Scanning vulnerabilities as artifacts."""

    COVERAGE_FUZZING = "coverage_fuzzing"
    """The coverage_fuzzing report collects coverage fuzzing bugs as artifacts."""

    DAST = "dast"
    """The dast report collects DAST vulnerabilities as artifacts."""

    DEPENDENCY_SCANNING = "dependency_scanning"
    """The dependency_scanning report collects Dependency Scanning vulnerabilities as artifacts."""

    DOTENV = "dotenv"
    """The dotenv report collects a set of environment variables as artifacts."""

    JUNIT = "junit"
    """The junit report collects JUnit report format XML files as artifacts."""

    LICENSE_SCANNING = "license_scanning"
    """The license_scanning report collects Licenses as artifacts."""

    LOAD_PERFORMANCE = "load_performance"
    """The load_performance report collects Load Performance Testing metrics as artifacts."""

    METRICS = "metrics"
    """The metrics report collects Metrics as artifacts."""

    PERFORMANCE = "performance"
    """The performance report collects Browser Performance Testing metrics as artifacts."""

    REQUIREMENTS = "requirements"
    """The requirements report collects requirements.json files as artifacts."""

    SAST = "sast"
    """The sast report collects SAST vulnerabilities as artifacts."""

    SECRET_DETECTION = "secret_detection"
    """The secret-detection report collects detected secrets as artifacts."""

    TERRAFORM = "terraform"
    """The terraform report obtains a Terraform tfplan.json file."""

Ancestors

  • enum.Enum

Class variables

var API_FUZZING

The api_fuzzing report collects API Fuzzing bugs as artifacts.

var COBERTURA

The cobertura report collects Cobertura coverage XML files.

var CODEQUALITY

The codequality report collects Code Quality issues as artifacts.

var CONTAINER_SCANNING

The container_scanning report collects Container Scanning vulnerabilities as artifacts.

var COVERAGE_FUZZING

The coverage_fuzzing report collects coverage fuzzing bugs as artifacts.

var DAST

The dast report collects DAST vulnerabilities as artifacts.

var DEPENDENCY_SCANNING

The dependency_scanning report collects Dependency Scanning vulnerabilities as artifacts.

var DOTENV

The dotenv report collects a set of environment variables as artifacts.

var JUNIT

The junit report collects JUnit report format XML files as artifacts.

var LICENSE_SCANNING

The license_scanning report collects Licenses as artifacts.

var LOAD_PERFORMANCE

The load_performance report collects Load Performance Testing metrics as artifacts.

var METRICS

The metrics report collects Metrics as artifacts.

var PERFORMANCE

The performance report collects Browser Performance Testing metrics as artifacts.

var REQUIREMENTS

The requirements report collects requirements.json files as artifacts.

var SAST

The sast report collects SAST vulnerabilities as artifacts.

var SECRET_DETECTION

The secret-detection report collects detected secrets as artifacts.

var TERRAFORM

The terraform report obtains a Terraform tfplan.json file.
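
For example, a coverage report could be attached to an existing Artifacts object via add_reports (a sketch based on the classes above):

from gcip import Artifacts, ArtifactsReport

artifacts = Artifacts("public/")
# the enum member is rendered to its string value, e.g. "cobertura"
artifacts.add_reports({ArtifactsReport.COBERTURA: "coverage/cobertura-coverage.xml"})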

class Cache (paths: List[str], cache_key: Optional[CacheKey] = None, untracked: Optional[bool] = None, when: Optional[WhenStatement] = None, policy: Optional[CachePolicy] = None)

This class represents the cache keyword.

Gitlab CI documentation: "Use cache to specify a list of files and directories to cache between Jobs. […] Caching is shared between Pipelines and Jobs. Caches are restored before artifacts."

Args

paths : List[str]
Use the paths directive to choose which files or directories to cache. Takes a list of one or more path strings.
cache_key : Optional[CacheKey]
The key keyword defines the affinity of caching between jobs. Defaults to CacheKey with default arguments.
untracked : Optional[bool]
Set the untracked keyword to True to cache all files that are untracked in your Git repository. Defaults to None (unset).
when : Optional[WhenStatement]
This keyword defines when to save the cache, depending on job status. Possible values are gcip.core.rule.WhenStatement.ON_SUCCESS, gcip.core.rule.WhenStatement.ON_FAILURE, gcip.core.rule.WhenStatement.ALWAYS. Defaults to None (unset).
policy : Optional[CachePolicy]
The CachePolicy determines if a Job can modify the cache or only read it. Defaults to None (unset).

Raises

ValueError
For unsupported values for the when parameter.
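
A minimal sketch (assuming Cache, CacheKey, CachePolicy and WhenStatement are exposed on the gcip root module):

from gcip import Cache, CacheKey, CachePolicy, WhenStatement

cache = Cache(
    paths=["node_modules/"],
    cache_key=CacheKey(files=["package-lock.json"]),
    when=WhenStatement.ON_SUCCESS,
    policy=CachePolicy.PULL_PUSH,
)
# paths are rendered relative to the project directory, prefixed with './'
print(cache.render())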
Expand source code
class Cache:
    """This class represents the [cache](https://docs.gitlab.com/ee/ci/yaml/#cache) keyword.

    Gitlab CI documentation: _"Use cache to specify a list of files and directories to cache between `gcip.core.job.Job`s.
    [...] Caching is shared between `gcip.core.pipeline.Pipeline`s and `gcip.core.job.Job`s. Caches are restored before artifacts."_

    Args:
        paths (str): Use the [paths directive](https://docs.gitlab.com/ee/ci/yaml/#cachepaths) to choose which
            files or directories to cache. Could be one or more path strings.
        cache_key (Optional[CacheKey]): The key keyword defines the affinity of caching between jobs.
            Defaults to `CacheKey` with default arguments.
        untracked (Optional[bool]): Set the [untracked keyword](https://docs.gitlab.com/ee/ci/yaml/#cacheuntracked) to `True` to cache
            all files that are untracked in your Git repository. Defaults to None (unset).
        when (Optional[WhenStatement]): [This keyword](https://docs.gitlab.com/ee/ci/yaml/#cachewhen) defines when to save the cache,
            depending on job status. Possible values are `gcip.core.rule.WhenStatement.ON_SUCCESS`,
            `gcip.core.rule.WhenStatement.ON_FAILURE`, `gcip.core.rule.WhenStatement.ALWAYS`. Defaults to None (unset).
        policy (Optional[CachePolicy]): The `CachePolicy` determines if a Job can modify the cache or only read it.
            Defaults to None (unset).

    Raises:
        ValueError: For unsupported values for the `when` parameter.
    """

    def __init__(
        self,
        paths: List[str],
        cache_key: Optional[CacheKey] = None,
        untracked: Optional[bool] = None,
        when: Optional[WhenStatement] = None,
        policy: Optional[CachePolicy] = None,
    ) -> None:
        self._paths = []
        self._untracked = untracked
        self._when = when
        self._policy = policy

        # Remove the project path prefix from the given paths.
        # Prepend ./ to each path to clarify that cache paths
        # are relative to CI_PROJECT_DIR
        for path in paths:
            if path.startswith(PredefinedVariables.CI_PROJECT_DIR):
                path = path[len(PredefinedVariables.CI_PROJECT_DIR) :]

            if not path.startswith("./"):
                path = "./" + path
            self._paths.append(path)

        # Get default CacheKey = PredefinedVariables.CI_COMMIT_REF_SLUG
        if cache_key:
            self._cache_key = cache_key
        else:
            self._cache_key = CacheKey()

        allowed_when_statements = [
            WhenStatement.ON_SUCCESS,
            WhenStatement.ON_FAILURE,
            WhenStatement.ALWAYS,
        ]
        if self._when and self._when not in allowed_when_statements:
            raise ValueError(
                f"{self._when} is not allowed. Allowed when statements: {allowed_when_statements}"
            )

    @property
    def paths(self) -> List[str]:
        """Equals the identical Class argument."""
        return self._paths

    @property
    def cache_key(self) -> CacheKey:
        """Equals the identical Class argument."""
        return self._cache_key

    @property
    def untracked(self) -> Optional[bool]:
        """Equals the identical Class argument."""
        return self._untracked

    @property
    def when(self) -> Optional[WhenStatement]:
        """Equals the identical Class argument."""
        return self._when

    @property
    def policy(self) -> Optional[CachePolicy]:
        """Equals the identical Class argument."""
        return self._policy

    def render(self) -> Dict[str, Any]:
        """Return a representation of this Cache object as dictionary with static values.

        The rendered representation is used by the gcip to dump it
        in YAML format as part of the .gitlab-ci.yml pipeline.

        Returns:
            Dict[str, Any]: A dictionary representing the cache object in Gitlab CI.
        """
        rendered: Dict[
            str,
            Union[str, bool, List[str], Union[str, Dict[str, Union[List[str], str]]]],
        ]
        rendered = {"paths": self._paths}
        if self._when:
            rendered["when"] = self._when.value
        if self._untracked is not None:
            rendered["untracked"] = self._untracked
        if self._policy:
            rendered["policy"] = self._policy.value
        rendered["key"] = self._cache_key.render()

        return rendered

    def _equals(self, cache: Optional[Cache]) -> bool:
        """
        Returns:
            bool: True if self equals to `cache`.
        """
        if not cache:
            return False
        return self.render() == cache.render()

Instance variables

prop cache_key : CacheKey

Equals the identical Class argument.

Expand source code
@property
def cache_key(self) -> CacheKey:
    """Equals the identical Class argument."""
    return self._cache_key
prop paths : List[str]

Equals the identical Class argument.

Expand source code
@property
def paths(self) -> List[str]:
    """Equals the identical Class argument."""
    return self._paths
prop policy : Optional[CachePolicy]

Equals the identical Class argument.

Expand source code
@property
def policy(self) -> Optional[CachePolicy]:
    """Equals the identical Class argument."""
    return self._policy
prop untracked : Optional[bool]

Equals the identical Class argument.

Expand source code
@property
def untracked(self) -> Optional[bool]:
    """Equals the identical Class argument."""
    return self._untracked
prop when : Optional[WhenStatement]

Equals the identical Class argument.

Expand source code
@property
def when(self) -> Optional[WhenStatement]:
    """Equals the identical Class argument."""
    return self._when

Methods

def render(self) ‑> Dict[str, Any]

Return a representation of this Cache object as dictionary with static values.

The rendered representation is used by the gcip to dump it in YAML format as part of the .gitlab-ci.yml pipeline.

Returns

Dict[str, Any]
A dictionary representing the cache object in Gitlab CI.
class CacheKey (key: Optional[str] = None, *, files: Optional[List[str]] = None, prefix: Optional[str] = None)

This class represents the cache:key keyword.

Gitlab CI documentation: "The key keyword defines the affinity of caching between jobs. You can have a single cache for all jobs, cache per-job, cache per-branch, or any other way that fits your workflow."

Args

key : Optional[str]
The key is the unique id of the cache. Jobs referencing caches with the same key share the cache contents. Mutually exclusive with files. Defaults to PredefinedVariables.CI_COMMIT_REF_SLUG if neither key nor files is set.
files : Optional[list]
A set of files is another way to define a cache's unique id. Jobs referencing caches with the same set of files share the cache contents. The cache:key:files keyword extends the cache:key functionality by making it easier to reuse some caches, and rebuild them less often, which speeds up subsequent pipeline runs. Mutually exclusive with key. Defaults to None.
prefix : Optional[str]
A prefix prepended to the key computed from files, e.g. to allow creating caches per branch. Defaults to None.

Raises

ValueError
If both key and files are provided.
ValueError
If both key and prefix are provided.
ValueError
If prefix but not files is provided.
ValueError
If key is only made out of dots '.'.
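
Two common ways to construct a cache key, sketched from the arguments above:

from gcip import CacheKey

# one cache per branch - the default when neither key nor files is given
branch_key = CacheKey()

# one cache per lock file content, with a prefix to distinguish it from other caches
lock_key = CacheKey(files=["Pipfile.lock"], prefix="python")

print(branch_key.render())  # renders the CI_COMMIT_REF_SLUG based key
print(lock_key.render())    # {'files': ['Pipfile.lock'], 'prefix': 'python'}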
Expand source code
class CacheKey:
    """This class represents the [cache:key](https://docs.gitlab.com/ee/ci/yaml/#cachekey) keyword.

    Gitlab CI documentation: _"The key keyword defines the affinity of caching between jobs. You can have a single cache for
    all jobs, cache per-job, cache per-branch, or any other way that fits your workflow."_

    Args:
        key (Optional[str]): The key is the unique id of the cache. `gcip.core.job.Job`s referencing caches with the same key are
            sharing the cache contents. Mutually exclusive with `files`. Defaults to
            `gcip.core.variables.PredefinedVariables.CI_COMMIT_REF_SLUG` if neither `key` nor `files` is set.
        files (Optional[list]): A set of files is another way to define a caches unique id. Jobs referencing caches with the same
            set of files are sharing the cache contents. The [cache:key:files](https://docs.gitlab.com/ee/ci/yaml/#cachekeyfiles) keyword
            extends the cache:key functionality by making it easier to reuse some caches, and rebuild them less often, which speeds up
            subsequent pipeline runs. Mutually exclusive with `key`. Defaults to None.
        prefix (Optional[str]): A prefix prepended to the key computed from `files`, e.g. to allow creation of caches per branch. Defaults to None.

    Raises:
        ValueError: If both `key` and `files` are provided.
        ValueError: If both `key` and `prefix` are provided.
        ValueError: If `prefix` but not `files` is provided.
        ValueError: If `key` is only made out of dots '.'.
    """

    def __init__(
        self,
        key: Optional[str] = None,
        *,
        files: Optional[List[str]] = None,
        prefix: Optional[str] = None,
    ) -> None:
        self._key = key
        self._files = files
        self._prefix = prefix

        if self._key and self._files:
            raise ValueError("Parameters key and files are mutually exclusive.")
        elif self._prefix and not self._files:
            raise ValueError(
                "Parameter 'prefix' can only be used together with 'files'."
            )

        if self._files is None and self._key is None:
            self._key = PredefinedVariables.CI_COMMIT_REF_SLUG

        if self._key:
            # Forward slash and dot not allowed for cache key,
            # therefore replacing both by '_' and '-'.
            self._key = self._key.replace("/", "_").replace(".", "-")

    @property
    def key(self) -> Optional[str]:
        """Equals the identical Class argument."""
        return self._key

    @property
    def files(self) -> Optional[List[str]]:
        """Equals the identical Class argument."""
        return self._files

    @property
    def prefix(self) -> Optional[str]:
        """Equals the identical Class argument."""
        return self._prefix

    def render(self) -> Union[str, Dict[str, Union[List[str], str]]]:
        """Return a representation of this cache key object as string or dictionary with static values.

        The rendered representation is used by the gcip to dump it
        in YAML format as part of the .gitlab-ci.yml pipeline.

        Returns:
            Union[str, Dict[str, Union[List[str], str]]]: A string or dictionary representing the cache object in Gitlab CI.
        """
        rendered: Union[str, Dict[str, Union[List[str], str]]]
        if self._key:
            rendered = self._key
        else:
            rendered = {}
            if self._files:
                rendered["files"] = self._files
            if self._prefix:
                rendered["prefix"] = self._prefix
        return rendered

Instance variables

prop files : Optional[List[str]]

Equals the identical Class argument.

Expand source code
@property
def files(self) -> Optional[List[str]]:
    """Equals the identical Class argument."""
    return self._files
prop key : Optional[str]

Equals the identical Class argument.

Expand source code
@property
def key(self) -> Optional[str]:
    """Equals the identical Class argument."""
    return self._key
prop prefix : Optional[str]

Equals the identical Class argument.

Expand source code
@property
def prefix(self) -> Optional[str]:
    """Equals the identical Class argument."""
    return self._prefix

Methods

def render(self) ‑> Union[str, Dict[str, Union[str, List[str]]]]

Return a representation of this cache key object as string or dictionary with static values.

The rendered representation is used by the gcip to dump it in YAML format as part of the .gitlab-ci.yml pipeline.

Returns

Union[str, Dict[str, Union[List[str], str]]]
A string or dictionary representing the cache object in Gitlab CI.
class CachePolicy (*args, **kwds)

This class represents the cache:policy keyword.

The policy determines if a Job can modify the cache or only read it.

Expand source code
class CachePolicy(Enum):
    """This class represents the [cache:policy](https://docs.gitlab.com/ee/ci/yaml/#cachepolicy) keyword.

    The policy determines if a Job can modify the cache or only read it.
    """

    PULL_PUSH = "pull-push"
    """
    The default behavior of a caching job is to download the files at the start of execution, and to
    re-upload them at the end. Any changes made by the job are persisted for future runs.
    """

    PULL = "pull"
    """
    If you know the job does not alter the cached files, you can skip the upload step by setting this policy in the job specification.
    """

Ancestors

  • enum.Enum

Class variables

var PULL

If you know the job does not alter the cached files, you can skip the upload step by setting this policy in the job specification.

var PULL_PUSH

The default behavior of a caching job is to download the files at the start of execution, and to re-upload them at the end. Any changes made by the job are persisted for future runs.
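
As a sketch, a job that only consumes a cache can skip the upload step by using the PULL policy:

from gcip import Cache, CachePolicy

# this cache is downloaded at job start but never re-uploaded
read_only_cache = Cache(paths=["vendor/"], policy=CachePolicy.PULL)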

class Environment (name: str, url: Optional[str] = None)

This class represents the Gitlab CI Environment keyword.

Use Environment to specify an environment to use for the Job.

Args

name : str
The name of the environment the job deploys to.
url : Optional[str]
A single URL.
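
A short sketch (assuming Environment is importable from the gcip root module; name and URL are placeholders):

from gcip import Environment

review = Environment("review", url="https://review.example.com")
print(review.render())  # {'name': 'review', 'url': 'https://review.example.com'}

# with_url() returns an altered copy, the original keeps its url
other = review.with_url("https://other.example.com")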
Expand source code
@dataclass
class Environment:
    """This module represents the Gitlab CI [Environment](https://docs.gitlab.com/ee/ci/yaml/#environment) keyword.

    Use `Environment` to specify an environment to use for the `gcip.core.job.Job`.

    Args:
        name (str): The name of the environment the job deploys to.
        url (Optional[str]): A single URL.
    """

    name: str
    url: Optional[str] = None

    def with_url(self, url: str) -> Environment:
        """
        Returns a copy of that environment with altered url.
        You can still use the original Environment object with its original url.
        """
        copy = deepcopy(self)
        copy.url = url
        return copy

    def render(self) -> Dict[str, Union[str, List[str]]]:
        """Return a representation of this Environment object as dictionary with static values.

        The rendered representation is used by the gcip to dump it
        in YAML format as part of the .gitlab-ci.yml pipeline.

        Returns:
            Dict[str, Union[str, List[str]]]: A dictionary representing the environment object in Gitlab CI.
        """
        rendered: Dict[str, Union[str, List[str]]] = {}

        rendered["name"] = self.name

        if self.url:
            rendered["url"] = self.url

        return rendered

    def _equals(self, environment: Optional[Environment]) -> bool:
        """
        Returns:
            bool: True if self equals to `environment`.
        """
        if not environment:
            return False

        return self.render() == environment.render()

Class variables

var name : str
var url : Optional[str]

Methods

def render(self) ‑> Dict[str, Union[str, List[str]]]

Return a representation of this Environment object as dictionary with static values.

The rendered representation is used by the gcip to dump it in YAML format as part of the .gitlab-ci.yml pipeline.

Returns

Dict[str, Union[str, List[str]]]
A dictionary representing the environment object in Gitlab CI.
def with_url(self, url: str) ‑> Environment

Returns a copy of that environment with altered url. You can still use the original Environment object with its original url.

class Image (name: str, tag: Optional[str] = None, entrypoint: Optional[List[str]] = None)

This class represents the Gitlab CI Image keyword.

Use Image to specify a Docker image to use for the Job.

Objects of this class are not meant to be altered. This is because Image objects are typically defined at a central place and often re-used. Altering the object in one place may lead to unpredictable changes at every reference to that object. That is why this class has no setter methods. However, you can use the .with_tag() and .with_entrypoint() methods on an Image object, which return an altered copy of that image. Thus you can re-use a centrally maintained Image object and modify it just for the place where the altered image (copy) is used.

Args

name : str
The fully qualified image name. Could include repository and tag as usual.
tag : Optional[str]
Container image tag in the registry to use.
entrypoint : Optional[List[str]]
Overwrites the container's entrypoint. Defaults to None.
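
A short sketch of working with copy-on-modify Image objects (the image name and entrypoint are placeholders):

from gcip import Image

base_image = Image("docker.io/library/python", tag="3.11")
# with_entrypoint() returns an altered copy, the original stays untouched
shell_image = base_image.with_entrypoint("/bin/sh", "-c")
print(shell_image.render())
# {'name': 'docker.io/library/python:3.11', 'entrypoint': ['/bin/sh', '-c']}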
Expand source code
@dataclass
class Image:
    """This module represents the Gitlab CI [Image](https://docs.gitlab.com/ee/ci/yaml/#image) keyword.

    Use `Image` to specify a Docker image to use for the `gcip.core.job.Job`.

    Objects of this class are not meant to be altered. This is because Image objects are typically be defined
    at a central place and often re-used. Altering the object at one place may lead to unpredictable changes
    at any reference to that object. That is this class has no setter methods. However you can use  the
    `.with_tag()` and `.with_entrypoint()` methods on an Image object, which will return an altered copy
    of that image. Thus you can re-use a centrally maintained Image object and modify it for just the
    place you are using the altered image (copy).

    Args:
        name (str): The fully qualified image name. Could include repository and tag as usual.
        tag (Optional[str]): Container image tag in the registry to use.
        entrypoint (Optional[List[str]]): Overwrites the container's entrypoint. Defaults to None.
    """

    name: str
    tag: Optional[str] = None
    entrypoint: Optional[List[str]] = None

    def with_tag(self, tag: str) -> Image:
        """
        Returns a copy of that image with altered tag.
        You can still use the original Image object with its original tag.
        """
        copy = deepcopy(self)
        copy.tag = tag
        return copy

    def with_entrypoint(self, *entrypoint: str) -> Image:
        """
        Returns a copy of that image with altered entrypoint.
        You can still use the original Image object with its original entrypoint.
        """
        copy = deepcopy(self)
        copy.entrypoint = list(entrypoint)
        return copy

    def render(self) -> Dict[str, Union[str, List[str]]]:
        """Return a representation of this Image object as dictionary with static values.

        The rendered representation is used by the gcip to dump it
        in YAML format as part of the .gitlab-ci.yml pipeline.

        Returns:
            Dict[str, Union[str, List[str]]]: A dictionary representing the image object in Gitlab CI.
        """
        rendered: Dict[str, Union[str, List[str]]] = {}

        rendered["name"] = self.name + (f":{self.tag}" if self.tag else "")

        if self.entrypoint:
            rendered["entrypoint"] = self.entrypoint

        return rendered

    def _equals(self, image: Optional[Image]) -> bool:
        """
        Returns:
            bool: True if self equals to `image`.
        """
        if not image:
            return False

        return self.render() == image.render()

Class variables

var entrypoint : Optional[List[str]]
var name : str
var tag : Optional[str]

Methods

def render(self) ‑> Dict[str, Union[str, List[str]]]

Return a representation of this Image object as dictionary with static values.

The rendered representation is used by the gcip to dump it in YAML format as part of the .gitlab-ci.yml pipeline.

Returns

Dict[str, Union[str, List[str]]]
A dictionary representing the image object in Gitlab CI.
def with_entrypoint(self, *entrypoint: str) ‑> Image

Returns a copy of that image with altered entrypoint. You can still use the original Image object with its original entrypoint.

def with_tag(self, tag: str) ‑> Image

Returns a copy of that image with altered tag. You can still use the original Image object with its original tag.

class IncludeArtifact (job: str, artifact: str)

A special type of include: Use a TriggerJob with IncludeArtifact to run a child pipeline with a generated configuration file from a previous job.

Args

job : str
Job name to include the artifact from.
artifact : str
Relative path to the artifact which is produced by job.
Expand source code
class IncludeArtifact(Include):
    """A special type of include: Use a `gcip.core.job.TriggerJob` with `IncludeArtifact` to run [a child pipeline with a generated configuration
    file from a previous job](https://docs.gitlab.com/ee/ci/yaml/#trigger-child-pipeline-with-generated-configuration-file):

    Args:
        job (str): Job name to include the artifact from.
        artifact (str): Relative path to the artifact which is produced by `job`.
    """

    def __init__(self, job: str, artifact: str) -> None:
        self._rendered_include = {"job": job, "artifact": artifact}

Ancestors

  • Include

Inherited members

class IncludeFile (file: str, project: str, ref: Optional[str] = None)

This module represents the Gitlab CI include:file keyword.

Args

file : str
Relative path to the file to include.
project : str
Project to include the file from.
ref : Optional[str], optional
Project branch to include the file from. Defaults to None.
Expand source code
class IncludeFile(Include):
    """This module represents the Gitlab CI [include:file](https://docs.gitlab.com/ee/ci/yaml/#includefile) keyword.

    Args:
        file (str): Relative path to the file to include.
        project (str): Project to include the file from.
        ref (Optional[str], optional): Project branch to include the file from. Defaults to None.
    """

    def __init__(
        self,
        file: str,
        project: str,
        ref: Optional[str] = None,
    ) -> None:
        self._rendered_include = {"file": file, "project": project}
        if ref:
            self._rendered_include["ref"] = ref

Ancestors

  • Include

Inherited members

class IncludeLocal (local: str)

This module represents the Gitlab CI include:local keyword.

Args

local : str
Relative path to the file within this repository to include.
Expand source code
class IncludeLocal(Include):
    """This module represents the Gitlab CI [include:local](https://docs.gitlab.com/ee/ci/yaml/#includelocal) keyword.

    Args:
        local (str): Relative path to the file within this repository to include.
    """

    def __init__(self, local: str) -> None:
        self._rendered_include = {"local": local}

Ancestors

  • Include

Inherited members

class IncludeRemote (remote: str)

This module represents the Gitlab CI include:remote keyword.

Args

remote : str
URL to include the file from.

Raises

ValueError
If remote is not a valid URL.
Expand source code
class IncludeRemote(Include):
    """This module represents the Gitlab CI [include:remote](https://docs.gitlab.com/ee/ci/yaml/#includeremote) keyword.

    Args:
        remote (str): URL to include the file from.

    Raises:
        ValueError: If `remote` is not a valid URL.
    """

    def __init__(self, remote: str) -> None:
        if not is_valid_url(remote):
            raise ValueError(f"`remote` is not a valid URL: {remote}")

        self._rendered_include = {"remote": remote}

Ancestors

  • Include

Inherited members

class IncludeTemplate (template: str)

This class represents the Gitlab CI include:template keyword.

Args

template : str
Gitlab template pipeline to include.
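
A construction sketch of the different include types (paths, project names and URLs are placeholders; assuming the classes are importable from the gcip root module):

from gcip import IncludeFile, IncludeLocal, IncludeRemote, IncludeTemplate

includes = [
    IncludeLocal("ci/common.yml"),
    IncludeFile(file="/templates/.build.yml", project="my-group/my-project", ref="main"),
    IncludeRemote("https://example.com/templates/.deploy.yml"),
    IncludeTemplate("Auto-DevOps.gitlab-ci.yml"),
]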
Expand source code
class IncludeTemplate(Include):
    """This class represents the Gitlab CI [include:template](https://docs.gitlab.com/ee/ci/yaml/#includetemplate) keyword.

    Args:
        template (str): Gitlab template pipeline to include.
    """

    def __init__(self, template: str) -> None:
        self._rendered_include = {"template": template}

Ancestors

  • Include

Inherited members

class Job (*, script: Union[AnyStr, List[str]], name: Optional[str] = None, stage: Optional[str] = None, image: Optional[Union[Image, str]] = None, allow_failure: Optional[Union[bool, str, int, List[int]]] = None, variables: Optional[Dict[str, str]] = None, tags: Optional[List[str]] = None, rules: Optional[List[Rule]] = None, dependencies: Optional[List[Union[Job, Sequence]]] = None, needs: Optional[List[Union[Need, Job, Sequence]]] = None, artifacts: Optional[Artifacts] = None, cache: Optional[Cache] = None, when: Optional[WhenStatement] = None, environment: Optional[Union[Environment, str]] = None, retry: Optional[Union[Retry, int]] = None, timeout: Optional[str] = None, resource_group: Optional[str] = None)

This class represents the Gitlab CI Job

Attributes

script : Union[AnyStr, List[str]]
The script(s) to be executed.
name : Optional[str]
The name of the job. In contrast to stage, only the name is set and not the stage of the job. If name is set, the job's stage has no value and defaults to the 'test' stage. Either name or stage must be set. Defaults to None.
stage : Optional[str]
The name and stage of the job. In contrast to name, the job's stage will also be set to this value. Either name or stage must be set. Defaults to None.
allow_failure : Optional[bool]
The allow_failure keyword of the Job. Defaults to None (unset).
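
A brief construction sketch (assuming Job is importable from the gcip root module as described in the package overview):

from gcip import Job

build_job = Job(
    stage="build",
    script=["pip install build", "python -m build"],
)
# most modification methods return the Job itself, so calls can be chained
build_job.add_tags("docker").add_variables(PIP_CACHE_DIR=".cache/pip")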
Expand source code
class Job:
    """This class represents the Gitlab CI [Job](https://docs.gitlab.com/ee/ci/yaml/#job-keywords)

    Attributes:
        script (Union[AnyStr, List[str]]): The [script(s)](https://docs.gitlab.com/ee/ci/yaml/#script) to be executed.
        name (Optional[str]): The name of the job. In contrast to `stage` only the name is set and not the stage of the job.
            If `name` is set, the job's stage has no value and defaults to the 'test' stage.
            Either `name` or `stage` must be set. Defaults to `None`.
        stage (Optional[str]): The name and stage of the job. In contrast to `name` the job's stage will also be set to this value.
            Either `name` or `stage` must be set. Defaults to `None`.
        allow_failure (Optional[bool]): The [allow_failure](https://docs.gitlab.com/ee/ci/yaml/#allow_failure) keyword of the Job.
            Defaults to `None` (unset).
    """

    def __init__(
        self,
        *,
        script: Union[AnyStr, List[str]],
        name: Optional[str] = None,
        stage: Optional[str] = None,
        image: Optional[Union[Image, str]] = None,
        allow_failure: Optional[Union[bool, str, int, List[int]]] = None,
        variables: Optional[Dict[str, str]] = None,
        tags: Optional[List[str]] = None,
        rules: Optional[List[Rule]] = None,
        dependencies: Optional[List[Union[Job, Sequence]]] = None,
        needs: Optional[List[Union[Need, Job, Sequence]]] = None,
        artifacts: Optional[Artifacts] = None,
        cache: Optional[Cache] = None,
        when: Optional[WhenStatement] = None,
        environment: Optional[Union[Environment, str]] = None,
        retry: Optional[Union[Retry, int]] = None,
        timeout: Optional[str] = None,
        resource_group: Optional[str] = None,
    ) -> None:
        self._image: Optional[Image] = None
        self._variables: Dict[str, str] = {}
        self._tags: OrderedSetType = {}
        self._rules: List[Rule] = []
        self._dependencies: Optional[List[Union[Job, Sequence]]] = None
        self._needs: Optional[List[Union[Need, Job, Sequence]]] = None
        self._scripts: List[str]
        self._scripts_to_prepend: List[str] = []
        self._scripts_to_append: List[str] = []
        self._artifacts: Optional[Artifacts] = artifacts
        self._cache: Optional[Cache] = cache
        self._environment: Optional[Environment] = None
        self._retry: Optional[Retry] = None
        self._parents: List[Sequence] = list()
        self._original: Optional[Job]
        self._when: Optional[WhenStatement] = when
        self._timeout: Optional[str] = timeout
        self._resource_group: Optional[str] = resource_group

        if stage and name:
            self._name = f"{name}-{stage}"
            self._stage = stage
        elif stage:
            self._name = stage
            self._stage = stage
        elif name:
            self._name = name
            # default for unset stages is 'test' -> https://docs.gitlab.com/ee/ci/yaml/#stages
            self._stage = "test"
        else:
            raise ValueError(
                "At least one of the parameters `name` or `stage` have to be set."
            )

        self._name = self._name.replace("_", "-")
        self._stage = self._stage.replace("-", "_")

        if isinstance(script, str):
            self._scripts = [script]
        elif isinstance(script, list):
            self._scripts = script
        else:
            raise AttributeError(
                "script parameter must be of type string or list of strings"
            )

        # internally self._allow_failure is set to a special value 'untouched' indicating this value is untouched by the user.
        # This is because the user can set the value from outside to True, False or None, indicating the value should not be rendered.
        # 'untouched' allows for sequences to determine, if this value should be initialized or not.
        self._allow_failure: Optional[Union[bool, str, int, List[int]]] = (
            allow_failure if allow_failure is not None else "untouched"
        )

        if image:
            self.set_image(image)
        if tags:
            self.add_tags(*tags)
        if rules:
            self.append_rules(*rules)
        if dependencies:
            self.add_dependencies(*dependencies)
        if needs:
            self.add_needs(*needs)
        if variables:
            self.add_variables(**variables)
        if environment:
            self.set_environment(environment)
        if retry:
            self.set_retry(retry)

    @property
    def name(self) -> str:
        """The name of the Job

        This property is affected by the rendering process, where `gcip.core.sequence.Sequence`s will
        populate the job name depending on their names. That means you can be sure to get the jobs
        final name when rendered.
        """
        return self._name

    @property
    def stage(self) -> str:
        """The [stage](https://docs.gitlab.com/ee/ci/yaml/#stage) keyword of the Job

        This property is affected by the rendering process, where `gcip.core.sequence.Sequence`s will
        populate the job stage depending on their stages. That means you can be sure to get the jobs
        final stage when rendered.
        """
        return self._stage

    @property
    def image(self) -> Optional[Image]:
        """The [image](https://docs.gitlab.com/ee/ci/yaml/#image) keyword of the Job"""
        return self._image

    @property
    def allow_failure(self) -> Optional[Union[bool, str, int, List[int]]]:
        """The [allow_failure](https://docs.gitlab.com/ee/ci/yaml/#allow_failure) keyword of the Job.

        A value of `None` means this key is unset and thus not contained in the rendered output.
        """
        if (
            self._allow_failure is None
            or isinstance(self._allow_failure, bool)
            or isinstance(self._allow_failure, int)
            or isinstance(self._allow_failure, list)
        ):
            return self._allow_failure
        return None

    @property
    def variables(self) -> Dict[str, str]:
        """The [variables](https://docs.gitlab.com/ee/ci/yaml/#variables) keyword of the Job"""
        return self._variables

    @property
    def tags(self) -> List[str]:
        """The [tags](https://docs.gitlab.com/ee/ci/yaml/#tags) keyword of the Job"""
        return list(self._tags.keys())

    @property
    def rules(self) -> List[Rule]:
        """The [rules](https://docs.gitlab.com/ee/ci/yaml/#rules) keyword of the Job"""
        return self._rules

    @property
    def dependencies(self) -> Optional[List[Union[Job, Sequence]]]:
        """The [dependencies](https://docs.gitlab.com/ee/ci/yaml/#dependencies) keyword of the Job"""
        return self._dependencies

    @property
    def needs(self) -> Optional[List[Union[Need, Job, Sequence]]]:
        """The [needs](https://docs.gitlab.com/ee/ci/yaml/#needs) keyword of the Job"""
        return self._needs

    @property
    def scripts(self) -> List[str]:
        """The [script](https://docs.gitlab.com/ee/ci/yaml/#script) keyword of the Job"""
        return self._scripts

    @property
    def cache(self) -> Optional[Cache]:
        """The [cache](https://docs.gitlab.com/ee/ci/yaml/#cache) keyword of the Job"""
        return self._cache

    @property
    def when(self) -> Optional[WhenStatement]:
        """The [when](https://docs.gitlab.com/ee/ci/yaml/#when) keyword of the Job"""
        return self._when

    @property
    def timeout(self) -> Optional[str]:
        """The [timeout](https://docs.gitlab.com/ee/ci/yaml/#timeout) keyword of the Job"""
        return self._timeout

    @property
    def resource_group(self) -> Optional[str]:
        """The [resource_group](https://docs.gitlab.com/ee/ci/yaml/#resource_group) keyword of the Job"""
        return self._resource_group

    @property
    def environment(self) -> Optional[Environment]:
        """The [environment](https://docs.gitlab.com/ee/ci/yaml/#environmentname) keyword of the Job"""
        return self._environment

    @property
    def retry(self) -> Optional[Retry]:
        """The [retry](https://docs.gitlab.com/ee/ci/yaml/#retry) keyword of the Job"""
        return self._retry

    @property
    def artifacts(self) -> Artifacts:
        """The [artifacts](https://docs.gitlab.com/ee/ci/yaml/#artifacts) keyword of the Job."""
        if not self._artifacts:
            self._artifacts = Artifacts()
        return self._artifacts

    def _extend_name(self, name: Optional[str]) -> None:
        """This method is used by `gcip.core.sequence.Sequence`s to populate the jobs name."""
        if name:
            self._name = name.replace("_", "-") + f"-{self._name}"

    def _extend_stage_value(self, stage: Optional[str]) -> None:
        """This method is used by `gcip.core.sequence.Sequence`s to populate the jobs stage."""
        if stage:
            self._stage += "_" + stage.replace("-", "_")

    def _extend_stage(self, stage: Optional[str]) -> None:
        """This method is used by `gcip.core.sequence.Sequence`s to populate the jobs name and stage."""
        if stage:
            self._extend_name(stage)
            self._extend_stage_value(stage)

    def _add_parent(self, parent: Sequence) -> None:
        """This method is called by `gcip.core.sequence.Sequence`s when the job is added to that sequence.

        The job needs to know its parents when `_get_all_instance_names()` is called.
        """
        self._parents.append(parent)

    def prepend_scripts(self, *scripts: str) -> Job:
        """Inserts one or more [script](https://docs.gitlab.com/ee/ci/yaml/#script)s before the current scripts.

        Returns:
            `Job`: The modified `Job` object.
        """
        self._scripts_to_prepend = list(scripts) + self._scripts_to_prepend
        return self

    def append_scripts(self, *scripts: str) -> Job:
        """Adds one or more [script](https://docs.gitlab.com/ee/ci/yaml/#script)s after the current scripts.

        Returns:
            `Job`: The modified `Job` object.
        """
        self._scripts_to_append.extend(scripts)
        return self

    def add_variables(self, **variables: str) -> Job:
        """Adds one or more [variables](https://docs.gitlab.com/ee/ci/yaml/#variables), each as keyword argument,
        to the job.

        Args:
            **variables (str): Each variable would be provided as keyword argument:
        ```
        job.add_variables(GREETING="hello", LANGUAGE="python")
        ```

        Returns:
            `Job`: The modified `Job` object.
        """
        self._variables.update(variables)
        return self

    def add_tags(self, *tags: str) -> Job:
        """Adds one or more [tags](https://docs.gitlab.com/ee/ci/yaml/#tags) to the job.

        Returns:
            `Job`: The modified `Job` object.
        """
        for tag in tags:
            self._tags[tag] = None
        return self

    def set_tags(self, *tags: str) -> Job:
        """Set the [tags](https://docs.gitlab.com/ee/ci/yaml/#tags) to the job.

        Returns:
            `Job`: The modified `Job` object.
        """
        self._tags = {}
        self.add_tags(*tags)
        return self

    def set_cache(self, cache: Optional[Cache]) -> Job:
        """Sets the [cache](https://docs.gitlab.com/ee/ci/yaml/#cache) keyword of the Job.

        Any previous values will be overwritten.

        Args:
            cache (Optional[Cache]): See `gcip.core.cache.Cache` class.

        Returns:
            Job: Returns the modified `Job` object.
        """
        if cache:
            self._cache = cache
        return self

    def set_when(self, when: Optional[WhenStatement]) -> Job:
        """Sets the [when](https://docs.gitlab.com/ee/ci/yaml/#when) keyword of the Job.

        Any previous values will be overwritten.

        Args:
            when (Optional[WhenStatement]): See `gcip.core.when.WhenStatement` class.

        Returns:
            Job: Returns the modified `Job` object.
        """
        if when:
            self._when = when
        return self

    def set_timeout(self, timeout: Optional[str]) -> Job:
        """Sets the [timeout](https://docs.gitlab.com/ee/ci/yaml/#timeout) keyword of the Job.

        Any previous values will be overwritten.

        Args:
            timeout (Optional[str]): A string defining a timespan as in the Gitlab CI documentation.

        Returns:
            Job: Returns the modified `Job` object.
        """
        if timeout:
            self._timeout = timeout
        return self

    def set_resource_group(self, resource_group: Optional[str]) -> Job:
        """Sets the [resource_group](https://docs.gitlab.com/ee/ci/yaml/#resource_group) keyword of the Job.

        Any previous values will be overwritten.

        Args:
            resource_group (Optional[str]): A string defining a resource group as in the Gitlab CI documentation.

        Returns:
            Job: Returns the modified `Job` object.
        """
        if resource_group:
            self._resource_group = resource_group
        return self

    def set_environment(self, environment: Optional[Union[Environment, str]]) -> Job:
        """Sets the environment of this job.

        For a simple environment you can provide the environment as a string.
        If you want to set the environment url or other options, you have to provide an Environment object instead.

        Args:
            environment (Optional[Union[Environment, str]]): Can be either `string` or `Environment`.

        Returns:
            Job: Returns the modified :class:`Job` object.
        """
        if environment:
            if isinstance(environment, str):
                environment = Environment(environment)
            self._environment = environment
        return self

    def set_retry(self, retry: Optional[Union[Retry, int]]) -> Job:
        """Sets the retry count of this job.

        For a simple retry you can provide the retry count as a number.
        If you want to set the when condition or exit codes, you have to provide a Retry object instead.

        Args:
            retry (Optional[Union[Retry, int]]): Can be either `int` or `Retry`.

        Returns:
            Job: Returns the modified :class:`Job` object.
        """
        if retry:
            if isinstance(retry, int):
                retry = Retry(max=retry)
            self._retry = retry
        return self

    def set_artifacts(self, artifacts: Optional[Artifacts]) -> Job:
        """Sets the [artifacts](https://docs.gitlab.com/ee/ci/yaml/#artifacts) keyword of the Job.

        Any previous values will be overwritten.

        Args:
            artifacts (Optional[Artifacts]): See `gcip.core.artifacts.Artifacts` class.

        Returns:
            Job: Returns the modified `Job` object.
        """
        if artifacts:
            self._artifacts = artifacts
        return self

    def append_rules(self, *rules: Rule) -> Job:
        """Adds one or more  [rule](https://docs.gitlab.com/ee/ci/yaml/#rules)s behind the current rules of the job.

        Args:
            *rules (Rule): See `gcip.core.rule.Rule` class.

        Returns:
            Job: Returns the modified `Job` object.
        """
        self._rules.extend(rules)
        return self

    def prepend_rules(self, *rules: Rule) -> Job:
        """Inserts one or more  [rule](https://docs.gitlab.com/ee/ci/yaml/#rules)s before the current rules of the job.

        Args:
            *rules (Rule): See `gcip.core.rule.Rule` class.

        Returns:
            Job: Returns the modified `Job` object.
        """
        self._rules = list(rules) + self._rules
        return self

    def add_dependencies(self, *dependencies: Union[Job, Sequence]) -> Job:
        """Add one or more [dependencies](https://docs.gitlab.com/ee/ci/yaml/#dependencies) to the job.

        Args:
            *dependencies (Union[Job, Sequence]):

        Returns:
            Job: Returns the modified `Job` object.
        """
        if self._dependencies is None:
            self._dependencies = []
        self._dependencies.extend(dependencies)
        return self

    def set_dependencies(
        self, dependencies: Optional[List[Union[Job, Sequence]]]
    ) -> Job:
        """Set/overwrite the list of [dependencies](https://docs.gitlab.com/ee/ci/yaml/index.html#dependencies) of this job.

        Args:
           dependencies (Optional[List[Union[Job, Sequence]]]): A list of `Job`s or `Sequence`s this job
               depends on. If the list is empty, the job downloads no artifacts. If `None` is given,
               then the `dependencies` keyword of this job will not be rendered in the pipeline output.

        Returns:
            Job: Returns the modified `Job` object.
        """
        self._dependencies = dependencies
        return self

    def add_needs(self, *needs: Union[Need, Job, Sequence]) -> Job:
        """Add one or more [needs](https://docs.gitlab.com/ee/ci/yaml/#needs) to the job.

        Args:
            *needs (Union[Need, Job, Sequence]):

        Returns:
            Job: Returns the modified `Job` object.
        """
        if self._needs is None:
            self._needs = []
        self._needs.extend(needs)
        return self

    def set_needs(self, needs: Optional[List[Union[Need, Job, Sequence]]]) -> Job:
        """Set/overwrite the list of [needs](https://docs.gitlab.com/ee/ci/yaml/#needs) of this job.

        Args:
           needs (Optional[List[Union[Need, Job, Sequence]]]): A list of `Need`s, `Job`s or `Sequence`s this job
               depends on. If the list is empty, the job needs nothing and will run immediately. If `None` is given,
               then the `needs` keyword of this job will not be rendered in the pipeline output.

        Returns:
            Job: Returns the modified `Job` object.
        """
        self._needs = needs
        return self

    def set_image(self, image: Optional[Union[Image, str]]) -> Job:
        """Sets the image of this job.

        For a simple container image you can provide the image name as a string.
        If you want to set the entrypoint, you have to provide an Image object instead.

        Args:
            image (Optional[Union[Image, str]]): Can be either `string` or `Image`.

        Returns:
            Job: Returns the modified :class:`Job` object.
        """
        if image:
            if isinstance(image, str):
                image = Image(image)
            self._image = image
        return self

    def set_allow_failure(
        self, allow_failure: Optional[Union[bool, str, int, List[int]]]
    ) -> Job:
        """Sets `allow_failure` for this job.

        Args:
            allow_failure (Optional[Union[bool, str, int, List[int]]]): The value `None` means that `allow_failure`
                is unset and is not rendered in the output of this job.
        """
        self._allow_failure = allow_failure
        return self

    def _get_all_instance_names(self) -> Set[str]:
        """Query all the possible names this job can have by residing within parent `gcip.core.sequence.Sequence`s.

        The possible image names are built by the `name` of this job plus all the possible prefix values from
        parent parent `gcip.core.sequence.Sequence`s. The prefix values from parent sequences are their names
        prefixed with the names of the parent parent sequences and so on.

        Imagine Job `A` resides within following sequenes:

        ```
        B:
          A
        C:
          D:
            A
        ```

        Then the instance names of `A` would be `B-A` and `C-D-A`.
        """
        instance_names: Set[str] = set()
        for parent in self._parents:
            for prefix in parent._get_all_instance_names(self):
                if prefix:
                    instance_names.add(f"{prefix}-{self._name}")
                else:
                    instance_names.add(self._name)
        return instance_names

    def _copy(self) -> Job:
        """Returns an independent, deep copy object of this job.

        Returns:
            `Job`: A copy of this job which, when modified, has no effects on this source job.
        """
        job_copy = copy.deepcopy(self)
        job_copy._original = self
        return job_copy

    def render(self) -> Dict[str, Any]:
        """Return a representation of this Job object as dictionary with static values.

        The rendered representation is used by the gcip to dump it
        in YAML format as part of the .gitlab-ci.yml pipeline.

        Returns:
            Dict[str, Any]: A dictionary representing the job object in Gitlab CI.
        """
        # late import to avoid circular dependencies
        from .sequence import Sequence

        rendered_job: Dict[str, Any] = {}

        if self._image:
            rendered_job["image"] = self._image.render()

        # self._allow_failure should not be rendered when its value is None or
        # the internal special value 'untouched'
        if isinstance(self._allow_failure, bool):
            rendered_job["allow_failure"] = self._allow_failure
        elif isinstance(self._allow_failure, int):
            rendered_job["allow_failure"] = {"exit_codes": [self._allow_failure]}
        elif isinstance(self._allow_failure, list):
            rendered_job["allow_failure"] = {"exit_codes": self._allow_failure}

        if self._dependencies is not None:
            dependency_jobs: List[Job] = list()
            for dependency in self._dependencies:
                if isinstance(dependency, Job):
                    dependency_jobs.append(dependency)
                elif isinstance(dependency, Sequence):
                    for job in dependency.nested_jobs:
                        dependency_jobs.append(job)
                else:
                    raise TypeError(
                        f"Dependency '{dependency}' is of type {type(dependency)}."
                    )

            dependency_names: Set[str] = set()
            for job in dependency_jobs:
                dependency_names.update(job._get_all_instance_names())

            rendered_job["dependencies"] = sorted(dependency_names)

        if self._needs is not None:
            need_jobs: List[Job] = list()
            rendered_needs: List[Dict[str, Union[str, bool]]] = list()
            for need in self._needs:
                if isinstance(need, Job):
                    need_jobs.append(need)
                elif isinstance(need, Sequence):
                    for job in need.last_jobs_executed:
                        need_jobs.append(job)
                elif isinstance(need, Need):
                    rendered_needs.append(need.render())
                else:
                    raise TypeError(f"Need '{need}' is of type {type(need)}.")

            job_names: Set[str] = set()
            for job in need_jobs:
                job_names.update(job._get_all_instance_names())

            for name in job_names:
                rendered_needs.append(Need(name).render())

            # sort needs by the name of the referenced job or pipeline
            rendered_needs = sorted(
                rendered_needs,
                key=lambda need: need["job"] if "job" in need else need["pipeline"],
            )

            rendered_job["needs"] = rendered_needs

        rendered_job.update(
            {
                "stage": self._stage,
                "script": [
                    *self._scripts_to_prepend,
                    *self._scripts,
                    *self._scripts_to_append,
                ],
            }
        )

        if self._variables:
            rendered_job["variables"] = self._variables

        if self._rules:
            rendered_rules = []
            for rule in self._rules:
                rendered_rules.append(rule.render())
            rendered_job.update({"rules": rendered_rules})

        if self._cache:
            rendered_job.update({"cache": self._cache.render()})

        if self._when:
            rendered_job.update({"when": self._when.value})

        if self._timeout:
            rendered_job.update({"timeout": self._timeout})

        if self._resource_group:
            rendered_job.update({"resource_group": self._resource_group})

        if self._artifacts:
            rendered_artifacts = self._artifacts.render()
            if rendered_artifacts:
                rendered_job.update({"artifacts": rendered_artifacts})

        if self._tags.keys():
            rendered_job["tags"] = list(self._tags.keys())

        if self._environment:
            rendered_job["environment"] = self._environment.render()

        if self._retry:
            rendered_job["retry"] = self._retry.render()

        return rendered_job

Subclasses

Instance variables

prop allow_failure : Optional[Union[bool, str, int, List[int]]]

The allow_failure keyword of the Job.

A value of None means this key is unset and thus not contained in the rendered output.

Expand source code
@property
def allow_failure(self) -> Optional[Union[bool, str, int, List[int]]]:
    """The [allow_failure](https://docs.gitlab.com/ee/ci/yaml/#allow_failure) keyword of the Job.

    A value of `None` means this key is unset and thus not contained in the rendered output.
    """
    if (
        self._allow_failure is None
        or isinstance(self._allow_failure, bool)
        or isinstance(self._allow_failure, int)
        or isinstance(self._allow_failure, list)
    ):
        return self._allow_failure
    return None
prop artifacts : Artifacts

The artifacts keyword of the Job.

Expand source code
@property
def artifacts(self) -> Artifacts:
    """The [artifacts](https://docs.gitlab.com/ee/ci/yaml/#artifacts) keyword of the Job."""
    if not self._artifacts:
        self._artifacts = Artifacts()
    return self._artifacts
prop cache : Optional[Cache]

The cache keyword of the Job

Expand source code
@property
def cache(self) -> Optional[Cache]:
    """The [cache](https://docs.gitlab.com/ee/ci/yaml/#cache) keyword of the Job"""
    return self._cache
prop dependencies : Optional[List[Union[Job, Sequence]]]

The dependencies keyword of the Job

Expand source code
@property
def dependencies(self) -> Optional[List[Union[Job, Sequence]]]:
    """The [dependencies](https://docs.gitlab.com/ee/ci/yaml/#dependencies) keyword of the Job"""
    return self._dependencies
prop environment : Optional[Environment]

The environment keyword of the Job

Expand source code
@property
def environment(self) -> Optional[Environment]:
    """The [environment](https://docs.gitlab.com/ee/ci/yaml/#environmentname) keyword of the Job"""
    return self._environment
prop image : Optional[Image]

The image keyword of the Job

Expand source code
@property
def image(self) -> Optional[Image]:
    """The [image](https://docs.gitlab.com/ee/ci/yaml/#image) keyword of the Job"""
    return self._image
prop name : str

The name of the Job

This property is affected by the rendering process, where Sequences will populate the job name depending on their names. That means you can be sure to get the jobs final name when rendered.

Expand source code
@property
def name(self) -> str:
    """The name of the Job

    This property is affected by the rendering process, where `gcip.core.sequence.Sequence`s will
    populate the job name depending on their names. That means you can be sure to get the jobs
    final name when rendered.
    """
    return self._name
prop needs : Optional[List[Union[Need, Job, Sequence]]]

The needs keyword of the Job

Expand source code
@property
def needs(self) -> Optional[List[Union[Need, Job, Sequence]]]:
    """The [needs](https://docs.gitlab.com/ee/ci/yaml/#needs) keyword of the Job"""
    return self._needs
prop resource_group : Optional[str]

The resource_group keyword of the Job

Expand source code
@property
def resource_group(self) -> Optional[str]:
    """The [resource_group](https://docs.gitlab.com/ee/ci/yaml/#resource_group) keyword of the Job"""
    return self._resource_group
prop retry : Optional[Retry]

The retry keyword of the Job

Expand source code
@property
def retry(self) -> Optional[Retry]:
    """The [retry](https://docs.gitlab.com/ee/ci/yaml/#retry) keyword of the Job"""
    return self._retry
prop rules : List[Rule]

The rules keyword of the Job

Expand source code
@property
def rules(self) -> List[Rule]:
    """The [rules](https://docs.gitlab.com/ee/ci/yaml/#rules) keyword of the Job"""
    return self._rules
prop scripts : List[str]

The script keyword of the Job

Expand source code
@property
def scripts(self) -> List[str]:
    """The [script](https://docs.gitlab.com/ee/ci/yaml/#script) keyword of the Job"""
    return self._scripts
prop stage : str

The stage keyword of the Job

This property is affected by the rendering process, where Sequences will populate the job stage depending on their stages. That means you can be sure to get the jobs final stage when rendered.

Expand source code
@property
def stage(self) -> str:
    """The [stage](https://docs.gitlab.com/ee/ci/yaml/#stage) keyword of the Job

    This property is affected by the rendering process, where `gcip.core.sequence.Sequence`s will
    populate the job stage depending on their stages. That means you can be sure to get the jobs
    final stage when rendered.
    """
    return self._stage
prop tags : List[str]

The tags keyword of the Job

Expand source code
@property
def tags(self) -> List[str]:
    """The [tags](https://docs.gitlab.com/ee/ci/yaml/#tags) keyword of the Job"""
    return list(self._tags.keys())
prop timeout : Optional[str]

The timeout keyword of the Job

Expand source code
@property
def timeout(self) -> Optional[str]:
    """The [timeout](https://docs.gitlab.com/ee/ci/yaml/#timeout) keyword of the Job"""
    return self._timeout
prop variables : Dict[str, str]

The variables keyword of the Job

Expand source code
@property
def variables(self) -> Dict[str, str]:
    """The [variables](https://docs.gitlab.com/ee/ci/yaml/#variables) keyword of the Job"""
    return self._variables
prop when : Optional[WhenStatement]

The when keyword of the Job

Expand source code
@property
def when(self) -> Optional[WhenStatement]:
    """The [when](https://docs.gitlab.com/ee/ci/yaml/#when) keyword of the Job"""
    return self._when

Methods

def add_dependencies(self, *dependencies: Union[Job, Sequence]) ‑> Job

Add one or more dependencies to the job.

Args

*dependencies (Union[Job, Sequence]):

Returns

Job
Returns the modified Job object.
def add_needs(self, *needs: Union[Need, Job, Sequence]) ‑> Job

Add one or more needs to the job.

Args

*needs (Union[Need, Job, Sequence]):

Returns

Job
Returns the modified Job object.
def add_tags(self, *tags: str) ‑> Job

Adds one or more tags to the job.

Returns

Job: The modified Job object.

def add_variables(self, **variables: str) ‑> Job

Adds one or more variables, each as keyword argument, to the job.

Args

**variables : str
Each variable would be provided as keyword argument:
job.add_variables(GREETING="hello", LANGUAGE="python")

Returns

Job: The modified Job object.

def append_rules(self, *rules: Rule) ‑> Job

Adds one or more rules after the current rules of the job.

Args

*rules : Rule
See Rule class.

Returns

Job
Returns the modified Job object.
def append_scripts(self, *scripts: str) ‑> Job

Adds one or more scripts after the current scripts.

Returns

Job: The modified Job object.

def prepend_rules(self, *rules: Rule) ‑> Job

Inserts one or more rules before the current rules of the job.

Args

*rules : Rule
See Rule class.

Returns

Job
Returns the modified Job object.
def prepend_scripts(self, *scripts: str) ‑> Job

Inserts one or more scripts before the current scripts.

Returns

Job: The modified Job object.
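As a quick illustration of how the script modifiers compose (a minimal sketch; the job and command strings are made up), prepended and appended scripts are rendered around the job's original scripts:

```
from gcip import Job

job = Job(stage="build", script="make build")
job.prepend_scripts("git fetch --tags", "make prepare")
job.append_scripts("make test-report")

# The rendered `script` keyword keeps the order
# prepended scripts, original scripts, appended scripts:
# ["git fetch --tags", "make prepare", "make build", "make test-report"]
print(job.render()["script"])
```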

def render(self) ‑> Dict[str, Any]

Return a representation of this Job object as dictionary with static values.

The rendered representation is used by the gcip to dump it in YAML format as part of the .gitlab-ci.yml pipeline.

Returns

Dict[str, Any]
A dictionary representing the job object in Gitlab CI.
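A minimal sketch of dumping a rendered job as YAML (assuming PyYAML is installed; normally the Pipeline writes the .gitlab-ci.yml for you):

```
import yaml

from gcip import Job

job = Job(stage="test", script="pytest")
job.add_variables(COVERAGE="true").add_tags("docker")

# render() returns a plain dictionary with static values,
# keyed here by the job's rendered name.
print(yaml.safe_dump({job.name: job.render()}, sort_keys=False))
```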
def set_allow_failure(self, allow_failure: Optional[Union[bool, str, int, List[int]]]) ‑> Job

Sets allow_failure for this job.

Args

allow_failure : Optional[Union[bool, str, int, List[int]]]
The value None means that allow_failure is unset and is not rendered in the output of this job.
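A small sketch of the resulting output, based on the render() logic shown in the source above (the exit codes are made up):

```
from gcip import Job

job = Job(stage="test", script="run-optional-checks.sh")

# Renders as `allow_failure: true`:
job.set_allow_failure(True)

# A single exit code or a list of exit codes renders as
# `allow_failure: {exit_codes: [...]}`:
job.set_allow_failure([137, 255])
```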
def set_artifacts(self, artifacts: Optional[Artifacts]) ‑> Job

Sets the artifacts keyword of the Job.

Any previous values will be overwritten.

Args

artifacts : Optional[Artifacts]
See Artifacts class.

Returns

Job
Returns the modified Job object.
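For example (a sketch; the paths and expire_in value are made up, and Artifacts is assumed to be re-exported from the gcip root module like the other core classes):

```
from gcip import Artifacts, Job

job = Job(stage="package", script="make dist")

# Overwrites any artifacts configured before on this job.
job.set_artifacts(Artifacts("dist/", "build.log", expire_in="1 week"))
```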
def set_cache(self, cache: Optional[Cache]) ‑> Job

Sets the cache keyword of the Job.

Any previous values will be overwritten.

Args

cache : Optional[Cache]
See Cache class.

Returns

Job
Returns the modified Job object.
def set_dependencies(self, dependencies: Optional[List[Union[Job, Sequence]]]) ‑> Job

Set/overwrite the list of dependencies of this job.

Args

dependencies : Optional[List[Union[Job, Sequence]]]
A list of Jobs or Sequences this job depends on. If the list is empty, the job downloads no artifacts. If None is given, then the dependencies keyword of this job will not be rendered in the pipeline output.

Returns

Job
Returns the modified Job object.
def set_environment(self, environment: Optional[Union[Environment, str]]) ‑> Job

Sets the environment of this job.

For a simple environment you can provide the environment as a string. If you want to set the environment url or other options, you have to provide an Environment object instead.

Args

environment : Optional[Union[Environment, str]]
Can be either string or Environment.

Returns

Job
Returns the modified Job object.
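A short sketch of both forms (the environment name is made up, and Environment is assumed to be re-exported from the gcip root module):

```
from gcip import Environment, Job

job = Job(stage="deploy", script="./deploy.sh")

# A plain string is wrapped into an Environment object internally:
job.set_environment("production")

# Equivalent, but the Environment object is the place to configure
# further options such as the environment url:
job.set_environment(Environment("production"))
```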
def set_image(self, image: Optional[Union[Image, str]]) ‑> Job

Sets the image of this job.

For a simple container image you can provide the image name as a string. If you want to set the entrypoint, you have to provide an Image object instead.

Args

image : Optional[Union[Image, str]]
Can be either string or Image.

Returns

Job
Returns the modified Job object.
def set_needs(self, needs: Optional[List[Union[Need, Job, Sequence]]]) ‑> Job

Set/overwrite the list of needs of this job.

Args

needs : Optional[List[Union[Need, Job, Sequence]]]
A list of Needs, Jobs or Sequences this job depends on. If the list is empty, the job needs nothing and will run immediately. If None is given, then the needs keyword of this job will not be rendered in the pipeline output.

Returns

Job
Returns the modified Job object.
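A minimal sketch combining Jobs and Need objects (the job names are made up):

```
from gcip import Job, Need

build = Job(stage="build", script="make build")
test = Job(stage="test", script="make test")

# `test` starts as soon as `build` has finished, regardless of stage order.
test.add_needs(build)

# Needs can also reference jobs by name, e.g. without downloading artifacts:
test.add_needs(Need("lint", artifacts=False))

# set_needs([]) renders an empty needs list (the job starts immediately),
# set_needs(None) removes the `needs` keyword from the rendered job again.
test.set_needs([build, Need("lint", artifacts=False)])
```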
def set_resource_group(self, resource_group: Optional[str]) ‑> Job

Sets the resource_group keyword of the Job.

Any previous values will be overwritten.

Args

resource_group : Optional[str]
A string defining a resource group as in the Gitlab CI documentation.

Returns

Job
Returns the modified Job object.
def set_retry(self, retry: Optional[Union[Retry, int]]) ‑> Job

Sets the retry count of this job.

For a simple retry you can provide the retry count as a number. If you want to set the when condition or exit codes, you have to provide a Retry object instead.

Args

retry : Optional[Union[Retry, int]]
Can be either int or Retry.

Returns

Job
Returns the modified Job object.
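Both forms in a short sketch (Retry is assumed to be re-exported from the gcip root module):

```
from gcip import Job, Retry

job = Job(stage="test", script="make flaky-test")

# A plain number is wrapped into Retry(max=...) internally:
job.set_retry(2)

# Equivalent, but the Retry object is where a `when` condition
# or exit codes would be configured:
job.set_retry(Retry(max=2))
```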
def set_tags(self, *tags: str) ‑> Job

Sets the tags of the job.

Returns

Job: The modified Job object.

def set_timeout(self, timeout: Optional[str]) ‑> Job

Sets the timeout keyword of the Job.

Any previous values will be overwritten.

Args

timeout : Optional[str]
A string defining a timespan as in the Gitlab CI documentation.

Returns

Job
Returns the modified Job object.
def set_when(self, when: Optional[WhenStatement]) ‑> Job

Sets the when keyword of the Job.

Any previous values will be overwritten.

Args

when : Optional[WhenStatement]
See WhenStatement class.

Returns

Job
Returns the modified Job object.
class JobFilter (*, script: Optional[Union[str, List[str]]] = None, name: Optional[str] = None, stage: Optional[str] = None, image: Optional[Union[Image, str]] = None, allow_failure: Optional[Union[bool, str, int, List[int]]] = None, variables: Optional[Dict[str, str]] = None, tags: Optional[Union[str, List[str]]] = None, rules: Optional[Union[Rule, List[Rule]]] = None, dependencies: Optional[Union[str, Job, Sequence, List[Union[str, Job, Sequence]]]] = None, needs: Optional[Union[str, Need, Job, Sequence, List[Union[str, Need, Job, Sequence]]]] = None, artifacts: Optional[Union[Artifacts, List[str]]] = None, cache: Optional[Union[Cache, List[str]]] = None, when: Optional[WhenStatement] = None, timeout: Optional[str] = None, resource_group: Optional[str] = None, environment: Optional[Union[Environment, str]] = None, retry: Optional[Union[Retry, int]] = None)

This class is used to check if Jobs match certain criteria.

When created, you can use the match method to check if Jobs match the JobFilter criteria:

filter = JobFilter(script="foo.*")
job = Job(stage="test", script="foobar")
assert filter.match(job)

Check the arguments for all optional criteria:

Args

script : Optional[Union[str, List[str]]], optional
Could be a single or a list of regular expressions. A job matches if for every regex provided there is a matching script in the job. Defaults to None.
name : Optional[str], optional
A job matches if the regex provided matches the job's name. ATTENTION: A job's name is always composed of the name and stage parameters given to its init method, separated by a dash. Also, all underscores are replaced by a dash. Defaults to None.
stage : Optional[str], optional
A job matches if the regex provided matches the job's stage. ATTENTION: A job's stage is always the stage given to the job's init method with all dashes replaced by underscores. Defaults to None.
image : Optional[Union[Image, str]], optional
A job matches depending on the type of the value provided. If the parameter is a regex (str), a job matches if the regex matches the job's image name. If the parameter is an Image, a job matches if the attributes of the Image provided equal the Image attributes of the job. Defaults to None.
allow_failure : Optional[Union[bool, str, int, List[int]]]
A job matches if allow_failure matches the value of this filter. The filter allows two special string values: 'untouched' - which filters out jobs whose 'allow_failure' value has not been set before - as well as 'none' - which filters out jobs whose 'allow_failure' value has explicitly been set to None by the user.
variables : Optional[Dict[str, str]], optional
The keys of the dictionary provided are variable names, the values are regular expressions. A job matches if it contains all variable names provided and their values match the appropriate regular expressions. Defaults to None.
tags : Optional[Union[str, List[str]]]
Could be a single or a list of regular expressions. A job matches if for every regex provided there is a matching tag in the job. Defaults to None.
rules : Optional[Union[Rule, List[Rule]]], optional
A job matches if it contains all rules provided. The rules are compared by the equality of their attributes. Defaults to None.
dependencies : Optional[Union[str, Job, Sequence, List[Union[str, Job, Sequence]]]]
A job matches depending on the type of the value provided. If the value is a (list of) Jobs or Sequences, a job matches if that job's dependencies contain all the Jobs and Sequences provided. Jobs and Sequences are compared by their identity. If the value is a list of strings representing regular expressions, a job matches if for every regex provided there is a dependency with a job name matching this regex. If the dependency is a sequence, at least one job from the sequence must match. Defaults to None.
needs : Optional[List[Union[str, Need, Job, Sequence]]], optional
A job matches depending on the type of the value provided. If the value is a (list of) Jobs, Sequences or Needs, a job matches if that job's needs contain all the Jobs, Sequences and Needs provided. Jobs and Sequences are compared by their identity. Needs are compared by the equality of their attributes. If the value is a list of strings representing regular expressions, a job matches if for every regex provided there is a need with a job name matching this regex. If the need is a sequence, at least one job from its last stage must match. Defaults to None.
artifacts : Optional[Union[Artifacts, List[str]]], optional
A job matches depending on the type of the value provided. If the value is an Artifacts, a job matches if its artifacts properties equal the provided artifacts properties. If the value is a list of strings as regular expressions, a job matches if for every regex provided there is at least one matching path in the job's artifacts object. Defaults to None.
cache : Optional[Union[Cache, List[str]]], optional
A job matches depending on the type of the value provided. If the value is a Cache, a job matches if its Cache properties equal the provided cache properties. If the value is a list of strings as regular expressions, a job matches if for every regex provided there is at least one matching path in the job's cache object. ATTENTION: A cache's internal path always starts with './'. Defaults to None.
when : Optional[WhenStatement], optional
A job matches if the value of the WhenStatement enum is equal to the one of the filter.
timeout : Optional[str]
A job matches if the value is equal to the one of the filter.
resource_group : Optional[str]
A job matches if the value is equal to the one of the filter.
environment : Optional[Union[Environment, str]], optional
A job matches depending on the type of the value provided. If the parameter is a regex (str), a job matches if the regex matches the job's environment name. If the parameter is an Environment, a job matches if the attributes of the Environment provided equal the Environment attributes of the job. Defaults to None.
retry : Optional[Union[Retry, int]]
A job matches if either the given Retry object matches or the job's retry max count equals the given number.
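A slightly larger sketch than the example above, combining several criteria (all names and values are made up; JobFilter is assumed to be re-exported from the gcip root module like the other core classes). Every provided criterion must match for match() to return True:

```
from gcip import Job, JobFilter

job = (
    Job(stage="test", script="pytest")
    .add_tags("docker")
    .add_variables(ENVIRONMENT="production")
)

filter = JobFilter(
    stage="test",                          # regex against the job's stage
    tags=["dock.*"],                       # every tag regex must match a tag
    variables={"ENVIRONMENT": "prod.*"},   # variable values matched as regex
)
assert filter.match(job)
```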
Expand source code
class JobFilter:
    def __init__(
        self,
        *,
        script: Optional[Union[str, List[str]]] = None,
        name: Optional[str] = None,
        stage: Optional[str] = None,
        image: Optional[Union[Image, str]] = None,
        allow_failure: Optional[Union[bool, str, int, List[int]]] = None,
        variables: Optional[Dict[str, str]] = None,
        tags: Optional[Union[str, List[str]]] = None,
        rules: Optional[Union[Rule, List[Rule]]] = None,
        dependencies: Optional[
            Union[str, Job, Sequence, List[Union[str, Job, Sequence]]]
        ] = None,
        needs: Optional[
            Union[str, Need, Job, Sequence, List[Union[str, Need, Job, Sequence]]]
        ] = None,
        artifacts: Optional[Union[Artifacts, List[str]]] = None,
        cache: Optional[Union[Cache, List[str]]] = None,
        when: Optional[WhenStatement] = None,
        timeout: Optional[str] = None,
        resource_group: Optional[str] = None,
        environment: Optional[Union[Environment, str]] = None,
        retry: Optional[Union[Retry, int]] = None,
    ) -> None:
        """
        This class is used to check if Jobs match certain criteria.

        When created, you can use the `match` method to check if `Job`s match the `JobFilter` criteria:

        ```
        filter = JobFilter(script="foo.*")
        job = Job(stage="test", script="foobar")
        assert filter.match(job)
        ```

        Check the arguments for all optional criteria:

        Args:
            script (Optional[Union[str, List[str]]], optional): Could be a single or a list of regular expressions. A job matches if for every regex provided
                there is a matching script in the job. Defaults to None.
            name (Optional[str], optional): A job matches if the regex provided matches the jobs name. ATTENTION: A jobs name is always composed of the name
                and stage parameter given to its init-Method, separated by a dash. Also all underscores are replaced by a dash. Defaults to None.
            stage (Optional[str], optional): A job matches if the regex provided matches the jobs stage. ATTENTION: A jobs stage is always the stage given to
                the jobs init-Method with all dashes replaced by underscores. Defaults to None.
            image (Optional[Union[Image, str]], optional): A job matches depending on the type of the value provided. If the parameter is a regex (str), a job
                matches if the regex matches to the jobs image name. If the parameter is an `Image`, a job matches if the attributes of the `Image` provided
                equals to the `Image` attributes of the job. Defaults to None.
            allow_failure (Optional[bool]): A job matches if `allow_failure` matches to the value of this filter. The filter allows two special string
                values 'untouched' - which filters out jobs whose 'allow_failure' value has not been set before - as well as 'none' - which filters out
                jobs whose 'allow_failure' value has explicitly been set to None by the user.
            variables (Optional[Dict[str, str]], optional): The keys of the dictionary provided are variable names, the values are regular expressions.
                A job matches if it contains all variable names provided and their values matches the appropriate regular expressions. Defaults to None.
            tags (Optional[Union[str, List[str]]]): Could be a single or a list of regular expressions. A job matches if for every regex provided there is a
                matching tag in the job. Defaults to None.
            rules (Optional[Union[Rule, List[Rule]]], optional): A job matches if it contains all rules provided. The rules are compared by the equality
                of their attributes. Defaults to None.
            dependencies (Optional[Union[str, Job, Sequence, List[Union[str, Job, Sequence]]]]): A Job matches depending on the type of the value provided.
                If the value is a (list of) `Job`s or `Sequence`s, a job matches if that jobs `dependencies` contains all the Jobs and Sequences provided.
                Jobs and Sequences are compared by their identity. If the value is a list of strings representing regular expressions, a job matches if for
                every regex provided there is a dependency with a job name matching this regex. If the dependency is a sequence, at least one job from the sequence
                must match. Defaults to None.
            needs (Optional[List[Union[str, Need, Job, Sequence]]], optional): A job matches depending on the type of the value provided. If the value is a
                 (list of) `Job`s, `Sequence`s or `Need`s, a job matches if that jobs `needs` contains all the Jobs, Sequences and Needs provided. Jobs and
                 Sequences are compared by their identity. Needs are compared by their equality of their attributes. If the value is a list of strings
                 representing regular expressions, a job matches if for every regex provided there is a need with a job name matching this regex. If the
                 Need is a sequence, at least one job from the last stage must match. Defaults to None.
            artifacts (Optional[Union[Artifacts, List[str]]], optional): A job matches depending on the type of the value provided. If the value is an
                `Artifacts`, a job matches if its artifacts properties equals to the provided artifacts properties. If the value is a list of strings as
                regular expressions, a job matches if for every regex provided there is at least one matching path in the jobs artifacts object.
                Defaults to None.
            cache (Optional[Union[Cache, List[str]]], optional): A job matches depending on the type of the value provided. If the value is a `Cache`,
                a job matches if its `Cache` properties equals to the provided cache properties. If the value is a list of strings as regular expressions,
                a job matches if for every regex provided there is at least one matching path in the job's cache object.
                ATTENTION: A caches internal path always starts with './'.  Defaults to None.
            when (Optional[WhenStatement], optional): A job matches, if the value of the WhenStatement enum is equal to the one of the filter.
            timeout (Optional[str]): A job matches if the value is equal to the one of the filter.
            resource_group (Optional[str]): A job matches if the value is equal to the one of the filter.
            environment (Optional[Union[Environment, str]], optional): A job matches depending on the type of the value provided. If the parameter is a regex (str), a job
                matches if the regex matches to the jobs environment name. If the parameter is an `Environment`, a job matches if the attributes of the `Environment` provided
                equals to the `Environment` attributes of the job. Defaults to None.
            retry (Optional[Union[Retry, int]]): A job matches if either the given Retry object matches or the job's retry max count equals the given number.
        """
        self._script: Optional[List[str]]
        if isinstance(script, str):
            self._script = [script]
        else:
            self._script = script

        self._name = name
        self._stage = stage
        self._image = image
        self._allow_failure = allow_failure
        self._variables = variables

        self._tags: Optional[List[str]]
        if isinstance(tags, str):
            self._tags = [tags]
        else:
            self._tags = tags

        self._rules: Optional[List[Rule]]
        if isinstance(rules, Rule):
            self._rules = [rules]
        else:
            self._rules = rules

        # late import to avoid circular dependencies
        from .sequence import Sequence

        self._dependencies: Optional[List[Union[str, Job, Sequence]]]
        if (
            isinstance(dependencies, str)
            or isinstance(dependencies, Job)
            or isinstance(dependencies, Sequence)
        ):
            self._dependencies = [dependencies]
        else:
            self._dependencies = dependencies

        self._needs: Optional[List[Union[str, Need, Job, Sequence]]]
        if (
            isinstance(needs, str)
            or isinstance(needs, Need)
            or isinstance(needs, Job)
            or isinstance(needs, Sequence)
        ):
            self._needs = [needs]
        else:
            self._needs = needs
        self._artifacts = artifacts
        self._cache = cache
        self._when = when
        self._timeout = timeout
        self._resource_group = resource_group
        self._environment = environment
        self._retry = retry

    # flake8: noqa: C901
    def match(self, job: Job) -> bool:
        if self._script:
            for script in self._script:
                match_in_this_iteration = False
                for job_script in job._scripts:
                    if re.match(script, job_script):
                        match_in_this_iteration = True
                        break
                if not match_in_this_iteration:
                    return False

        if self._name and not re.match(self._name, job._name):
            return False

        if self._stage and not re.match(self._stage, job._stage):
            return False

        if self._image:
            if not job._image:
                return False
            elif isinstance(self._image, Image) and not self._image._equals(job._image):
                return False
            elif isinstance(self._image, str) and not re.match(
                self._image, str(job._image.render()["name"])
            ):
                return False

        if self._allow_failure:
            if job._allow_failure is None:
                if self._allow_failure != "none":
                    return False
            elif self._allow_failure != job._allow_failure:
                return False

        if self._variables:
            for key in self._variables.keys():
                if key not in job._variables:
                    return False
                elif not re.match(self._variables[key], job._variables[key]):
                    return False

        if self._tags:
            for tag in self._tags:
                match_in_this_iteration = False
                for job_tag in job._tags.keys():
                    if re.match(tag, job_tag):
                        match_in_this_iteration = True
                        break
                if not match_in_this_iteration:
                    return False

        if self._rules:
            for self_rule in self._rules:
                match_in_this_iteration = False
                for job_rule in job._rules:
                    if self_rule._equals(job_rule):
                        match_in_this_iteration = True
                        break
                if not match_in_this_iteration:
                    return False

        if self._dependencies:
            if job._dependencies is None:
                return False
            else:
                # because the language checker does not recognise we have already checked, that
                # `job._dependencies` is not None, we need to create a new variable that we use in the
                # following code. The language checker accepts `job_dependencies` as not None
                job_dependencies = job._dependencies

            # late import to avoid circular dependencies
            from .sequence import Sequence

            for dependency in self._dependencies:
                if (
                    isinstance(dependency, Job) or isinstance(dependency, Sequence)
                ) and dependency not in job_dependencies:
                    return False

                match_in_this_iteration = False
                if isinstance(dependency, str):
                    for job_dependency in job_dependencies:
                        if isinstance(job_dependency, Job) and re.match(
                            dependency, job_dependency._name
                        ):
                            match_in_this_iteration = True
                            break
                        elif isinstance(job_dependency, Sequence):
                            for job in job_dependency.nested_jobs:
                                if re.match(dependency, job._name):
                                    match_in_this_iteration = True
                                    break
                            if match_in_this_iteration:
                                break
                    if not match_in_this_iteration:
                        return False

        if self._needs:
            if job._needs is None:
                return False
            else:
                # because the language checker does not recognise we have already checked, that
                # `job._needs` is not None, we need to create a new variable that we use in the
                # following code. The language checker accepts `job_needs` as not None
                job_needs = job._needs

            # late import to avoid circular dependencies
            from .sequence import Sequence

            for need in self._needs:
                if (
                    isinstance(need, Job) or isinstance(need, Sequence)
                ) and need not in job_needs:
                    return False

                match_in_this_iteration = False
                if isinstance(need, Need):
                    for job_need in job_needs:
                        if isinstance(job_need, Need) and need._equals(job_need):
                            match_in_this_iteration = True
                            break
                    if not match_in_this_iteration:
                        return False

                match_in_this_iteration = False
                if isinstance(need, str):
                    for job_need in job_needs:
                        if (
                            isinstance(job_need, Need)
                            and job_need._job
                            and re.match(need, job_need._job)
                        ):
                            match_in_this_iteration = True
                            break
                        elif isinstance(job_need, Job) and re.match(
                            need, job_need._name
                        ):
                            match_in_this_iteration = True
                            break
                        elif isinstance(job_need, Sequence):
                            for job in job_need.last_jobs_executed:
                                if re.match(need, job._name):
                                    match_in_this_iteration = True
                                    break
                            if match_in_this_iteration:
                                break
                    if not match_in_this_iteration:
                        return False

        if self._artifacts:
            if isinstance(self._artifacts, Artifacts):
                if not self._artifacts._equals(job._artifacts):
                    return False
            elif isinstance(self._artifacts, list):
                for artifact in self._artifacts:
                    match_in_this_iteration = False
                    for job_artifact_path in job.artifacts._paths:
                        if re.match(artifact, job_artifact_path):
                            match_in_this_iteration = True
                            break

                    if not match_in_this_iteration:
                        return False

        if self._cache:
            if not job._cache:
                return False
            if isinstance(self._cache, Cache):
                if not self._cache._equals(job._cache):
                    return False
            elif isinstance(self._cache, list):
                for regex in self._cache:
                    match_in_this_iteration = False
                    for path in job._cache._paths:
                        if re.match(regex, path):
                            match_in_this_iteration = True
                            break
                    if not match_in_this_iteration:
                        return False

        if self._when:
            if not job._when:
                return False
            if not self._when == job._when:
                return False

        if self._timeout:
            if not job._timeout:
                return False
            if self._timeout != job._timeout:
                return False

        if self._resource_group:
            if not job._resource_group:
                return False
            if self._resource_group != job._resource_group:
                return False

        if self._environment:
            if not job._environment:
                return False
            elif isinstance(
                self._environment, Environment
            ) and not self._environment._equals(job._environment):
                return False
            elif isinstance(self._environment, str) and not re.match(
                self._environment, str(job._environment.render()["name"])
            ):
                return False

        if self._retry:
            if not job._retry:
                return False
            elif isinstance(self._retry, Retry) and not self._retry._equals(job._retry):
                return False
            elif isinstance(self._retry, int) and self._retry != job._retry.max:
                return False

        return True

Methods

def match(self, job: Job) ‑> bool
class JobNameConflictError (job: Job)

This exception is used by the Pipeline when two rendered jobs have the same name.

When two or more jobs have the same name within a pipeline, one job will overwrite all the other jobs. This is almost certainly never the intention of the user, so they must be informed by this exception.

Attributes

job : Job
A Job whose name equals to another job already added to the rendered pipeline.
Expand source code
class JobNameConflictError(Exception):
    """This exception is used by the `Pipeline` when two rendered jobs have the same name.

    When two or more jobs have the same name within a pipeline, one job will overwrite
    all the other jobs. This is almost certainly never the intention of the user, so they
    must be informed by this exception.

    Attributes:
        job (Job): A `gcip.core.job.Job` whose name equals to another job already added to the rendered pipeline.
    """

    def __init__(self, job: Job):
        super().__init__(
            f"Two jobs have the same name '{job.name}' when rendering the pipeline."
            "\nPlease fix this by providing a different name and/or stage when adding those jobs to"
            " their sequences/pipeline."
        )

Ancestors

  • builtins.Exception
  • builtins.BaseException
class Need (job: Optional[str] = None, *, project: Optional[str] = None, ref: Optional[str] = None, pipeline: Optional[str] = None, artifacts: bool = True)

This class represents the Gitlab CI needs keyword.

The needs keyword allows out-of-order Gitlab CI jobs. A job which needs another job runs directly after that other job has finished successfully.

Args

job : Optional[str]
The name of the job to depend on. Can be left unset if pipeline is set. Defaults to None, which requires pipeline to be set.
project : Optional[str]
If the job resides in another pipeline you have to give its project name here. Defaults to None.
ref : Optional[str]
Branch of the remote project to depend on. Defaults to None.
pipeline : Optional[str]
When $CI_PIPELINE_ID of another pipeline is provided, then artifacts from this pipeline are downloaded. When the name of an other/project is provided, then the status of an upstream pipeline is mirrored. Defaults to None, which requires job to be set.
artifacts : bool
Download artifacts from the job to depend on. Defaults to True.

Raises

ValueError
If neither job nor pipeline is set.
ValueError
If ref is set but project is missing.
ValueError
If pipeline equals the CI_PIPELINE_ID of the own project.
ValueError
If both project and pipeline are set.
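A few construction examples derived from the rules above (the project, job and pipeline names are made up; Need is assumed to be re-exported from the gcip root module):

```
from gcip import Need

# Depend on a job within the same pipeline, but skip its artifacts:
Need("build-job", artifacts=False)

# Depend on a job in another project; `ref` defaults to "main"
# when a project is given without an explicit ref:
Need("deploy", project="my-group/other-project")

# Mirror the status of an upstream pipeline:
Need(pipeline="my-group/upstream-project")
```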
Expand source code
class Need(object):
    def __init__(
        self,
        job: Optional[str] = None,
        *,
        project: Optional[str] = None,
        ref: Optional[str] = None,
        pipeline: Optional[str] = None,
        artifacts: bool = True,
    ):
        """This class represents the Gitlab CI [needs](https://docs.gitlab.com/ee/ci/yaml/#needs) keyword.

        The `needs` keyword allows out-of-order Gitlab CI jobs.
        A job which needs another job runs directly after that other job has finished successfully.

        Args:
            job (Optional[str]): The name of the job to depend on. Can be left unset if `pipeline` is set. Defaults to None which requires
                `pipeline` to be set.
            project (Optional[str]): If the `job` resides in another pipeline you have to give its project name here. Defaults to None.
            ref (Optional[str]): Branch of the remote project to depend on. Defaults to None.
            pipeline (Optional[str]): When $CI_PIPELINE_ID of another pipeline is provided, then artifacts from this
                pipeline are downloaded. When the name of an `other/project` is provided, then the status of an
                upstream pipeline is mirrored. Defaults to None, which requires `job` to be set.
            artifacts (bool): Download artifacts from the `job` to depend on. Defaults to True.

        Raises:
            ValueError: If neither `job` nor `pipeline` is set.
            ValueError: If `ref` is set but `project` is missing.
            ValueError: If `pipeline` equals the CI_PIPELINE_ID of the own project.
            ValueError: If both `project` and `pipeline` are set.
        """
        if not job and not pipeline:
            raise ValueError("At least one of `job` or `pipeline` must be set.")

        if ref and not project:
            raise ValueError("'ref' parameter requires the 'project' parameter.")

        if project and pipeline:
            raise ValueError(
                "Needs accepts either `project` or `pipeline` but not both."
            )

        if pipeline and pipeline == PredefinedVariables.CI_PIPELINE_ID:
            raise ValueError(
                "The pipeline attribute does not accept the current pipeline ($CI_PIPELINE_ID). "
                "To download artifacts from a job in the current pipeline, use the basic form of needs."
            )

        self._job = job
        self._project = project
        self._ref = ref
        self._artifacts = artifacts
        self._pipeline = pipeline

        if self._project and not self._ref:
            self._ref = "main"

    def render(self) -> Dict[str, Union[str, bool]]:
        """Return a representation of this Need object as dictionary with static values.

        The rendered representation is used by the gcip to dump it
        in YAML format as part of the .gitlab-ci.yml pipeline.

        Returns:
            Dict[str, Any]: A dictionary representing the need object in Gitlab CI.
        """

        rendered_need: Dict[str, Union[str, bool]] = {}

        if self._job:
            rendered_need.update(
                {
                    "job": self._job,
                    "artifacts": self._artifacts,
                }
            )

        if self._project and self._ref:
            rendered_need.update({"project": self._project, "ref": self._ref})

        if self._pipeline:
            rendered_need["pipeline"] = self._pipeline

        return rendered_need

    def _equals(self, need: Optional[Need]) -> bool:
        """
        Returns:
            bool: True if self equals to `need`.
        """
        if not need:
            return False

        return self.render() == need.render()

Methods

def render(self) ‑> Dict[str, Union[bool, str]]

Return a representation of this Need object as dictionary with static values.

The rendered representation is used by the gcip to dump it in YAML format as part of the .gitlab-ci.yml pipeline.

Returns

Dict[str, Any]
A dictionary representing the need object in Gitlab CI.
class PagesJob

This class represents the Gitlab CI Job

Attributes

script : Union[AnyStr, List[str]]
The script(s) to be executed.
name : Optional[str]
The name of the job. In contrast to stage, only the name is set and not the stage of the job. If name is set, then the job's stage has no value, which defaults to the 'test' stage. Either name or stage must be set. Defaults to None.
stage : Optional[str]
The name and stage of the job. In contrast to name, the job's stage will also be set to this value. Either name or stage must be set. Defaults to None.
allow_failure : Optional[bool]
The allow_failure keyword of the Job. Defaults to None (unset).

This is a special kind of job which deploys Gitlab Pages.

This job has the static name pages and the static artifacts path ./public. Both preconfigurations can't be altered and are required for deploying Gitlab Pages properly. All methods which would typically alter the name, stage and artifacts of a job are overridden with an empty implementation.

This job is only for deploying Gitlab Pages artifacts within the ./public artifacts path. To create the artifacts you have to run jobs that generate those artifacts within the same ./public artifacts path before this PagesJob in the pipeline.

Because the name of the job can't be altered, this job may only exist once in the generated pipeline output. Typically you should add the PagesJob to the Pipeline.

The PagesJob is also preconfigured with the stage pages and the image busybox:latest. To change the stage of this job, use the set_stage() method. Make sure to run this job in a stage after all jobs that fill the public artifacts path with content.

Here is a simple example of how to use the PagesJob:

pipeline = Pipeline()
pipeline.add_children(
    Job(stage="deploy", script="./create-html.sh").add_artifacts_paths("public"),
    PagesJob(),
)
Expand source code
class PagesJob(Job):
    def __init__(self) -> None:
        """
        This is a special kind of job which deploys Gitlab Pages.

        This job has the static name `pages` and the static artifacts path `./public`. Both preconfigurations
        can't be altered and are required for deploying Gitlab Pages properly. All methods which would typically
        alter the name, stage and artifacts of a job are overridden with an empty implementation.

        This job is only for deploying Gitlab Pages artifacts within the `./public` artifacts path. To create the
        artifacts you have to run jobs that generate those artifacts within the same `./public` artifacts path
        before this PagesJob in the pipeline.

        Because the name of the job can't be altered, this job may only exist once in the generated pipeline output.
        Typically you should add the PagesJob to the `gcip.core.pipeline.Pipeline`.

        The PagesJob is also preconfigured with the stage `pages` and the image `busybox:latest`. To change the stage
        of this job, use the `set_stage()` method. Make sure to run this job in a stage after all jobs that
        fill the `public` artifacts path with content.

        Here is a simple example of how to use the PagesJob:

        ```
        pipeline = Pipeline()
        pipeline.add_children(
            Job(stage="deploy", script="./create-html.sh").add_artifacts_paths("public"),
            PagesJob(),
        )
        ```
        """
        super().__init__(stage="pages", script="echo 'Publishing Gitlab Pages'")
        self._name = "pages"
        super().artifacts.add_paths("public")
        super().set_image("busybox:latest")

    def set_stage(self, stage: str) -> PagesJob:
        """Set the name of this jobs stage to a value other than `pages`.

        Args:
            stage (str): A valid Gitlab CI Job stage name.

        Returns:
            PagesJob: The modified PagesJob object.
        """
        self._stage = stage
        return self

    def _extend_name(self, name: Optional[str]) -> None:
        """
        The job's name `pages` is fixed and can't be altered.
        """

    def _extend_stage(self, stage: Optional[str]) -> None:
        """
        The stage name can't be altered from parent sequences.
        """

    def _extend_stage_value(self, stage: Optional[str]) -> None:
        pass

    def _get_all_instance_names(self) -> Set[str]:
        """
        There should be only one instance of the job with the name `pages`.

        Returns:
            Set[str]: `set("pages")`
        """
        return set(self._name)

    def _copy(self) -> Job:
        """
        There should be only one instance of this job, that is why this method
        does not return a copy of this job but the job itself.
        """
        return self

    def add_artifacts_paths(self, *paths: str) -> Job:
        """
        This job does not accept artifact paths other than `./public` and thus
        ignores this call.
        """
        return self

Ancestors

Job

Methods

def add_artifacts_paths(self, *paths: str) ‑> Job

This job does not accept artifact paths other than ./public and thus ignores this call.

def set_stage(self, stage: str) ‑> PagesJob

Set the name of this job's stage to a value other than pages.

Args

stage : str
A valid Gitlab CI Job stage name.

Returns

PagesJob
The modified PagesJob object.
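
For instance, to run the Pages deployment in a dedicated final stage (a minimal sketch; the build job and its script are hypothetical and only stand in for whatever fills ./public):

pipeline = Pipeline()
pipeline.add_children(
    Job(stage="build", script="./create-html.sh").add_artifacts_paths("public"),
    PagesJob().set_stage("publish"),
)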

Inherited members

class Pipeline (*, includes: Optional[Union[Include, List[Include]]] = None)

A Sequence collects multiple Jobs and/or other Sequences into a group.

A Pipeline is the uppermost container of Jobs and Sequences.

A Pipeline is a Sequence itself but has the additional method Pipeline.write_yaml(). This method is responsible for writing the whole Gitlab CI pipeline to a YAML file which could then feed the dynamic child pipeline.

Args

includes : Optional[Union[Include, List[Include]]]
You can add global Includes to the pipeline. Gitlab CI Documentation: "Use include to include external YAML files in your CI/CD configuration." Defaults to None.

Raises

ValueError
If includes is not of type Include or list of Includes
Expand source code
class Pipeline(Sequence):
    def __init__(self, *, includes: Optional[Union[Include, List[Include]]] = None):
        """A Pipeline is the uppermost container of `gcip.core.job.Job`s and `gcip.core.sequence.Sequence`s.

        A Pipeline is a `gcip.core.sequence.Sequence` itself but has the additional method `Pipeline.write_yaml()`.
        This method is responsible for writing the whole Gitlab CI pipeline to a YAML file which could then feed
        the dynamic child pipeline.

        Args:
            includes (Optional[Union[Include, List[Include]]]): You can add global `gcip.core.include.Include`s to the pipeline.
                [Gitlab CI Documentation](https://docs.gitlab.com/ee/ci/yaml/#include): _"Use include to include external YAML files
                in your CI/CD configuration."_ Defaults to None.

        Raises:
            ValueError: If `includes` is not of type `Include` or `list` of `Includes`
        """
        self._services: List[Service] = list()

        if not includes:
            self._includes = []
        elif isinstance(includes, Include):
            self._includes = [includes]
        elif isinstance(includes, list):
            self._includes = includes
        else:
            raise ValueError(
                "Parameter include must of type gcip.Include or List[gcip.Include]"
            )
        super().__init__()

    def add_services(self, *services: Union[str, Service]) -> Pipeline:
        """Add one or more `gcip.core.service.Service`s to the pipeline.

        Gitlab CI Documentation: _"The services keyword defines a Docker image that runs during a job linked to the Docker image
        that the image keyword defines."_

        Args:
            services (Union[str, Service]): Simply use strings to name the services to link to the pipeline.
                Use objects of the `gcip.core.service.Service` class for more complex service configurations.

        Returns:
            `Pipeline`: The modified `Pipeline` object.
        """
        for service in services:
            if isinstance(service, str):
                service = Service(service)
            self._services.append(service)
        return self

    def add_include(self, include: Include) -> Pipeline:
        """Let you add global `gcip.core.include.Include`s to the pipeline.
        [Gitlab CI Documentation](https://docs.gitlab.com/ee/ci/yaml/#include): _"Use include to include external YAML files
        in your CI/CD configuration."_

        Returns:
            `Pipeline`: The modified `Pipeline` object.
        """
        self._includes.append(include)
        return self

    def add_children(
        self,
        *jobs_or_sequences: Union[Job, Sequence],
        stage: Optional[str] = None,
        name: Optional[str] = None,
    ) -> Pipeline:
        """
        Just calls `super().add_children()` but returns self as type `Pipeline`.

        See `gcip.core.sequence.Sequence.add_children()`
        """
        super().add_children(*jobs_or_sequences, stage=stage, name=name)
        return self

    def render(self) -> Dict[str, Any]:
        """Return a representation of this Pipeline object as dictionary with static values.

        The rendered representation is used by the gcip to dump it
        in YAML format as part of the .gitlab-ci.yml pipeline.

        Return:
            Dict[str, Any]: A dictionary prepresenting the pipeline object in Gitlab CI.
        """
        stages: OrderedSetType = {}
        pipeline: Dict[str, Any] = {}
        job_copies = self.populated_jobs

        for job in job_copies:
            # use the keys of dictionary as ordered set
            stages[job.stage] = None

        if self._includes:
            pipeline["include"] = [include.render() for include in self._includes]

        if self._services:
            pipeline["services"] = [service.render() for service in self._services]

        pipeline["stages"] = list(stages.keys())
        for job in job_copies:
            if job.name in pipeline:
                raise JobNameConflictError(job)

            pipeline[job.name] = job.render()
        return pipeline

    def write_yaml(self, filename: str = "generated-config.yml") -> None:
        """
        Create the Gitlab CI YAML file from this pipeline object.

        Use that YAML file to trigger a child pipeline.

        Args:
            filename (str, optional): The file name of the created yaml file. Defaults to "generated-config.yml".
        """
        import yaml

        with open(filename, "w") as generated_config:
            generated_config.write(
                yaml.dump(self.render(), default_flow_style=False, sort_keys=False)
            )

    def __enter__(self) -> Pipeline:
        return self

    def __exit__(
        self,
        exc_type: Optional[Any],
        exc_value: Optional[Any],
        exc_traceback: Optional[Any],
    ) -> None:
        self.write_yaml()

Ancestors

Sequence

Methods

def add_children(self, *jobs_or_sequences: Union[Job, Sequence], stage: Optional[str] = None, name: Optional[str] = None) ‑> Pipeline

Just calls super().add_children() but returns self as type Pipeline.

See Sequence.add_children()
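
A short sketch of the pass-through behaviour; the stage and name keywords are simply forwarded to Sequence.add_children(), which uses them to extend the names and stages of the added jobs (the job scripts shown are hypothetical):

pipeline = Pipeline()
pipeline.add_children(
    Job(stage="lint", script="flake8 ."),
    Job(stage="test", script="pytest"),
    name="backend",  # forwarded to Sequence.add_children() to namespace the added jobs
)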

def add_include(self, include: Include) ‑> Pipeline

Lets you add global Includes to the pipeline. Gitlab CI Documentation: "Use include to include external YAML files in your CI/CD configuration."

Returns

Pipeline: The modified Pipeline object.

def add_services(self, *services: Union[str, Service]) ‑> Pipeline

Add one or more Services to the pipeline.

Gitlab CI Documentation: "The services keyword defines a Docker image that runs during a job linked to the Docker image that the image keyword defines."

Args

services : Union[str, Service]
Simply use strings to name the services to link to the pipeline. Use objects of the Service class for more complex service configurations.

Returns

Pipeline: The modified Pipeline object.
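
For example (a minimal sketch; the image names are hypothetical and the Service class is assumed to take the image name as its first argument, matching the string shorthand described above):

pipeline = Pipeline()
pipeline.add_services("postgres:14", Service("docker:dind"))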

def render(self) ‑> Dict[str, Any]

Return a representation of this Pipeline object as a dictionary with static values.

The rendered representation is used by gcip to dump it in YAML format as part of the .gitlab-ci.yml pipeline.

Returns

Dict[str, Any]: A dictionary representing the pipeline object in Gitlab CI.

def write_yaml(self, filename: str = 'generated-config.yml') ‑> None

Create the Gitlab CI YAML file from this pipeline object.

Use that YAML file to trigger a child pipeline.

Args

filename : str, optional
The file name of the created yaml file. Defaults to "generated-config.yml".
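
Because __exit__() calls write_yaml(), the Pipeline can also be used as a context manager. The following sketch (with a hypothetical test job) writes generated-config.yml when the with block is left:

with Pipeline() as pipeline:
    pipeline.add_children(Job(stage="test", script="pytest"))

# equivalent explicit form
pipeline = Pipeline()
pipeline.add_children(Job(stage="test", script="pytest"))
pipeline.write_yaml("generated-config.yml")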

Inherited members

class PredefinedVariables

This class contains constants for Gitlab CI predefined variables
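
A short usage sketch (assuming PredefinedVariables is exported from the gcip root module like the other core classes, and that an EnvProxy resolves to the environment value when converted to a string, e.g. inside an f-string): EnvProxy values resolve at rendering time and raise KeyError if the variable is unset, OptionalEnvProxy values resolve to None instead, while the sensitive str constants stay literal variable strings that Gitlab resolves during pipeline execution.

from gcip import Job, PredefinedVariables

job = Job(
    stage="deploy",
    script=[
        # resolved while rendering the pipeline (raises KeyError if CI_COMMIT_SHORT_SHA is unset)
        f"echo deploying commit {PredefinedVariables.CI_COMMIT_SHORT_SHA}",
        # stays as the literal '${CI_JOB_TOKEN}' and is resolved by Gitlab at job runtime
        f"docker login -u gitlab-ci-token -p {PredefinedVariables.CI_JOB_TOKEN} registry.example.com",
    ],
)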

Expand source code
class PredefinedVariables:
    """This class contains constants for [Gitlab CI predefined variables](https://docs.gitlab.com/ee/ci/variables/predefined_variables.html)"""

    CHAT_CHANNEL: EnvProxy = EnvProxy("CHAT_CHANNEL")
    """
    Source chat channel which triggered the ChatOps command.

    Added in GitLab 10.6
    Available in GitLab Runner all

    Raises:
        KeyError: If environment variable not available.
    """

    CHAT_INPUT: EnvProxy = EnvProxy("CHAT_INPUT")
    """
    Additional arguments passed in the ChatOps command.

    Added in GitLab 10.6
    Available in GitLab Runner all

    Raises:
        KeyError: If environment variable not available.
    """

    CI: EnvProxy = EnvProxy("CI")
    """
    Mark that job is executed in CI environment.

    Added in GitLab all
    Available in GitLab Runner 0.4

    Raises:
        KeyError: If environment variable not available.
    """

    CI_API_V4_URL: EnvProxy = EnvProxy("CI_API_V4_URL")
    """
    The GitLab API v4 root URL.

    Added in GitLab 11.7
    Available in GitLab Runner all

    Raises:
        KeyError: If environment variable not available.
    """

    CI_BUILDS_DIR: EnvProxy = EnvProxy("CI_BUILDS_DIR")
    """
    Top-level directory where builds are executed.

    Added in GitLab all
    Available in GitLab Runner 11.10

    Raises:
        KeyError: If environment variable not available.
    """

    CI_COMMIT_BEFORE_SHA: EnvProxy = EnvProxy("CI_COMMIT_BEFORE_SHA")
    """
    The previous latest commit present on a branch. Is always
    0000000000000000000000000000000000000000 in pipelines for merge requests.

    Added in GitLab 11.2
    Available in GitLab Runner all

    Raises:
        KeyError: If environment variable not available.
    """

    CI_COMMIT_DESCRIPTION: EnvProxy = EnvProxy("CI_COMMIT_DESCRIPTION")
    """
    The description of the commit: the message without the first line,
    if the title is shorter than 100 characters; the full message otherwise.

    Added in GitLab 10.8
    Available in GitLab Runner all

    Raises:
        KeyError: If environment variable not available.

    """

    CI_COMMIT_MESSAGE: EnvProxy = EnvProxy("CI_COMMIT_MESSAGE")
    """
    The full commit message.

    Added in GitLab 10.8
    Available in GitLab Runner all

    Raises:
        KeyError: If environment variable not available.
    """

    CI_COMMIT_REF_NAME: EnvProxy = EnvProxy("CI_COMMIT_REF_NAME")
    """
    The branch or tag name for which project is built.

    Added in GitLab 9.0
    Available in GitLab Runner all

    Raises:
        KeyError: If environment variable not available.
    """

    CI_COMMIT_REF_PROTECTED: EnvProxy = EnvProxy("CI_COMMIT_REF_PROTECTED")
    """
    true if the job is running on a protected reference, false if not.

    Added in GitLab 11.11
    Available in GitLab Runner all

    Raises:
        KeyError: If environment variable not available.
    """

    CI_COMMIT_REF_SLUG: EnvProxy = EnvProxy("CI_COMMIT_REF_SLUG")
    """
    $CI_COMMIT_REF_NAME in lowercase, shortened to 63 bytes,
    and with everything except 0-9 and a-z replaced with -.
    No leading / trailing -. Use in URLs, host names and domain names.

    Added in GitLab 9.0
    Available in GitLab Runner all

    Raises:
        KeyError: If environment variable not available.
    """

    CI_COMMIT_SHA: EnvProxy = EnvProxy("CI_COMMIT_SHA")
    """
    The commit revision for which project is built.

    Added in GitLab 9.0
    Available in GitLab Runner all

    Raises:
        KeyError: If environment variable not available.
    """

    CI_COMMIT_SHORT_SHA: EnvProxy = EnvProxy("CI_COMMIT_SHORT_SHA")
    """
    The first eight characters of CI_COMMIT_SHA.

    Added in GitLab 11.7
    Available in GitLab Runner all

    Raises:
        KeyError: If environment variable not available.
    """

    CI_COMMIT_BRANCH: OptionalEnvProxy = OptionalEnvProxy("CI_COMMIT_BRANCH")
    """
    The commit branch name. Present in branch pipelines,
    including pipelines for the default branch.
    Not present in merge request pipelines or tag pipelines.

    Added in GitLab 12.6
    Available in GitLab Runner 0.5
    """

    CI_COMMIT_TAG: OptionalEnvProxy = OptionalEnvProxy("CI_COMMIT_TAG")
    """
    The commit tag name. Present only when building tags.

    Added in GitLab 9.0
    Available in GitLab Runner 0.5
    """

    CI_COMMIT_TAG_MESSAGE: OptionalEnvProxy = OptionalEnvProxy("CI_COMMIT_TAG_MESSAGE")
    """
    The commit tag message. Available only in pipelines for tags.

    Added in Gitlab 15.5
    Available in GitLab Runner all
    """

    CI_COMMIT_TITLE: EnvProxy = EnvProxy("CI_COMMIT_TITLE")
    """
    The title of the commit - the full first line of the message.

    Added in GitLab 10.8
    Available in GitLab Runner all

    Raises:
        KeyError: If environment variable not available.
    """

    CI_COMMIT_TIMESTAMP: EnvProxy = EnvProxy("CI_COMMIT_TIMESTAMP")
    """
    The timestamp of the commit in the ISO 8601 format.

    Added in GitLab 13.4
    Available in GitLab Runner all

    Raises:
        KeyError: If environment variable not available.
    """

    CI_CONCURRENT_ID: EnvProxy = EnvProxy("CI_CONCURRENT_ID")
    """
    Unique ID of build execution in a single executor.

    Added in GitLab all
    Available in GitLab Runner 11.10

    Raises:
        KeyError: If environment variable not available.
    """

    CI_CONCURRENT_PROJECT_ID: EnvProxy = EnvProxy("CI_CONCURRENT_PROJECT_ID")
    """
    Unique ID of build execution in a single executor and project.

    Added in GitLab all
    Available in GitLab Runner 11.10

    Raises:
        KeyError: If environment variable not available.
    """

    CI_CONFIG_PATH: EnvProxy = EnvProxy("CI_CONFIG_PATH")
    """
    The path to CI configuration file. Defaults to .gitlab-ci.yml.

    Added in GitLab 9.4
    Available in GitLab Runner 0.5

    Raises:
        KeyError: If environment variable not available.
    """

    CI_DEBUG_TRACE: EnvProxy = EnvProxy("CI_DEBUG_TRACE")
    """
    Whether debug logging (tracing) is enabled.

    Added in GitLab all
    Available in GitLab Runner 1.7

    Raises:
        KeyError: If environment variable not available.
    """

    CI_DEFAULT_BRANCH: EnvProxy = EnvProxy("CI_DEFAULT_BRANCH")
    """
    The name of the default branch for the project.

    Added in GitLab 12.4
    Available in GitLab Runner all

    Raises:
        KeyError: If environment variable not available.
    """

    CI_DEPENDENCY_PROXY_GROUP_IMAGE_PREFIX: EnvProxy = EnvProxy(
        "CI_DEPENDENCY_PROXY_GROUP_IMAGE_PREFIX"
    )
    """
    The image prefix for pulling images through the Dependency Proxy.

    Added in GitLab 13.7
    Available in GitLab Runner all

    Raises:
        KeyError: If environment variable not available.
    """

    CI_DEPENDENCY_PROXY_SERVER: EnvProxy = EnvProxy("CI_DEPENDENCY_PROXY_SERVER")
    """
    The server for logging in to the Dependency Proxy. This is equivalent to $CI_SERVER_HOST:$CI_SERVER_PORT.

    Added in GitLab 13.7
    Available in GitLab Runner all

    Raises:
        KeyError: If environment variable not available.
    """

    CI_DEPENDENCY_PROXY_PASSWORD: str = "${CI_DEPENDENCY_PROXY_PASSWORD}"
    """
    The password to use to pull images through the Dependency Proxy.

    ATTENTION: Contrary to most other variables in this class, this variable is not resolved at rendering
    time. Instead the variable string is returned, which is then resolved during pipeline execution.
    This is because the value contains sensitive information.

    Added in GitLab 13.7
    Available in GitLab Runner all

    Raises:
        KeyError: If environment variable not available.
    """

    CI_DEPENDENCY_PROXY_USER: EnvProxy = EnvProxy("CI_DEPENDENCY_PROXY_USER")
    """
    The username to use to pull images through the Dependency Proxy.

    Added in GitLab 13.7
    Available in GitLab Runner all

    Raises:
        KeyError: If environment variable not available.
    """

    CI_DEPLOY_FREEZE: OptionalEnvProxy = OptionalEnvProxy("CI_DEPLOY_FREEZE")
    """
    Included with the value true if the pipeline runs during a deploy freeze window.

    Added in GitLab 13.2
    Available in GitLab Runner all
    """

    CI_DEPLOY_PASSWORD: str = "${CI_DEPLOY_PASSWORD}"
    """
    Authentication password of the GitLab Deploy Token,
    only present if the Project has one related.

    ATTENTION: Contrary to most other variables in this class, this variable is not resolved at rendering
    time. Instead the variable string is returned, which is then resolved during pipeline execution.
    This is because the value contains sensitive information.

    Added in GitLab 10.8
    Available in GitLab Runner all

    Raises:
        KeyError: If environment variable not available.
    """

    CI_DEPLOY_USER: EnvProxy = EnvProxy("CI_DEPLOY_USER")
    """
    Authentication username of the GitLab Deploy Token,
    only present if the Project has one related.

    Added in GitLab 10.8
    Available in GitLab Runner all

    Raises:
        KeyError: If environment variable not available.
    """

    CI_DISPOSABLE_ENVIRONMENT: OptionalEnvProxy = OptionalEnvProxy(
        "CI_DISPOSABLE_ENVIRONMENT"
    )
    """
    Marks that the job is executed in a disposable environment
    (something that is created only for this job and disposed of/destroyed
    after the execution - all executors except shell and ssh).
    If the environment is disposable, it is set to true,
    otherwise it is not defined at all.

    Added in GitLab all
    Available in GitLab Runner 10.1
    """

    CI_ENVIRONMENT_NAME: OptionalEnvProxy = OptionalEnvProxy("CI_ENVIRONMENT_NAME")
    """
    The name of the environment for this job.
    Only present if environment:name is set.

    Added in GitLab 8.15
    Available in GitLab Runner all
    """

    CI_ENVIRONMENT_SLUG: OptionalEnvProxy = OptionalEnvProxy("CI_ENVIRONMENT_SLUG")
    """
    A simplified version of the environment name,
    suitable for inclusion in DNS, URLs, Kubernetes labels, and so on.
    Only present if environment:name is set.

    Added in GitLab 8.15
    Available in GitLab Runner all
    """

    CI_ENVIRONMENT_URL: OptionalEnvProxy = OptionalEnvProxy("CI_ENVIRONMENT_URL")
    """
    The URL of the environment for this job.
    Only present if environment:url is set.

    Added in GitLab 9.3
    Available in GitLab Runner all
    """

    CI_EXTERNAL_PULL_REQUEST_IID: OptionalEnvProxy = OptionalEnvProxy(
        "CI_EXTERNAL_PULL_REQUEST_IID"
    )
    """
    Pull Request ID from GitHub if the pipelines are for
    external pull requests.
    Available only if only [external_pull_requests] or
    rules syntax is used and the pull request is open.

    Added in GitLab 12.3
    Available in GitLab Runner all

    """

    CI_EXTERNAL_PULL_REQUEST_SOURCE_REPOSITORY: OptionalEnvProxy = OptionalEnvProxy(
        "CI_EXTERNAL_PULL_REQUEST_SOURCE_REPOSITORY"
    )
    """
    The source repository name of the pull request if the pipelines are
    for external pull requests. Available only if only
    [external_pull_requests] or rules syntax is used and
    the pull request is open.

    Added in GitLab 13.3
    Available in GitLab Runner all

    """

    CI_EXTERNAL_PULL_REQUEST_TARGET_REPOSITORY: OptionalEnvProxy = OptionalEnvProxy(
        "CI_EXTERNAL_PULL_REQUEST_TARGET_REPOSITORY"
    )
    """
    The target repository name of the pull request if the pipelines
    are for external pull requests. Available only if only
    [external_pull_requests] or rules syntax is used and the pull
    request is open.

    Added in GitLab 13.3
    Available in GitLab Runner all

    """

    CI_EXTERNAL_PULL_REQUEST_SOURCE_BRANCH_NAME: OptionalEnvProxy = OptionalEnvProxy(
        "CI_EXTERNAL_PULL_REQUEST_SOURCE_BRANCH_NAME"
    )
    """
    The source branch name of the pull request if the pipelines are for
    external pull requests. Available only if only [external_pull_requests]
    or rules syntax is used and the pull request is open.

    Added in GitLab 12.3
    Available in GitLab Runner all

    """

    CI_EXTERNAL_PULL_REQUEST_SOURCE_BRANCH_SHA: OptionalEnvProxy = OptionalEnvProxy(
        "CI_EXTERNAL_PULL_REQUEST_SOURCE_BRANCH_SHA"
    )
    """
    The HEAD SHA of the source branch of the pull request if the pipelines
    are for external pull requests. Available only if only
    [external_pull_requests] or rules syntax is used and the pull
    request is open.

    Added in GitLab 12.3
    Available in GitLab Runner all

    """

    CI_EXTERNAL_PULL_REQUEST_TARGET_BRANCH_NAME: OptionalEnvProxy = OptionalEnvProxy(
        "CI_EXTERNAL_PULL_REQUEST_TARGET_BRANCH_NAME"
    )
    """
    The target branch name of the pull request if the pipelines are for
    external pull requests. Available only if only [external_pull_requests]
    or rules syntax is used and the pull request is open.

    Added in GitLab 12.3
    Available in GitLab Runner all

    """

    CI_EXTERNAL_PULL_REQUEST_TARGET_BRANCH_SHA: OptionalEnvProxy = OptionalEnvProxy(
        "CI_EXTERNAL_PULL_REQUEST_TARGET_BRANCH_SHA"
    )
    """
    The HEAD SHA of the target branch of the pull request if the pipelines
    are for external pull requests. Available only if only
    [external_pull_requests] or rules syntax is used and the pull
    request is open.

    Added in GitLab 12.3
    Available in GitLab Runner all

    """

    CI_HAS_OPEN_REQUIREMENTS: OptionalEnvProxy = OptionalEnvProxy(
        "CI_HAS_OPEN_REQUIREMENTS"
    )
    """
    Included with the value true only if the pipeline’s project has any
    open requirements. Not included if there are no open requirements for
    the pipeline’s project.

    Added in GitLab 13.1
    Available in GitLab Runner all
    """

    CI_OPEN_MERGE_REQUESTS: OptionalEnvProxy = OptionalEnvProxy(
        "CI_OPEN_MERGE_REQUESTS"
    )
    """
    Available in branch and merge request pipelines. Contains a
    comma-separated list of up to four merge requests that use the current
    branch and project as the merge request source.
    For example gitlab-org/gitlab!333,gitlab-org/gitlab-foss!11.

    Added in GitLab 13.8
    Available in GitLab Runner all
    """

    CI_JOB_ID: EnvProxy = EnvProxy("CI_JOB_ID")
    """
    The unique ID of the current job that GitLab CI/CD uses internally.

    Added in GitLab 9.0
    Available in GitLab Runner all

    Raises:
        KeyError: If environment variable not available.

    """

    CI_JOB_IMAGE: EnvProxy = EnvProxy("CI_JOB_IMAGE")
    """
    The name of the image running the CI job.

    Added in GitLab 12.9
    Available in GitLab Runner 12.9

    Raises:
        KeyError: If environment variable not available.

    """

    CI_JOB_MANUAL: EnvProxy = EnvProxy("CI_JOB_MANUAL")
    """
    The flag to indicate that job was manually started.

    Added in GitLab 8.12
    Available in GitLab Runner all

    Raises:
        KeyError: If environment variable not available.

    """

    CI_JOB_NAME: EnvProxy = EnvProxy("CI_JOB_NAME")
    """
    The name of the job as defined in .gitlab-ci.yml.

    Added in GitLab 9.0
    Available in GitLab Runner 0.5

    Raises:
        KeyError: If environment variable not available.

    """

    CI_JOB_STAGE: EnvProxy = EnvProxy("CI_JOB_STAGE")
    """
    The name of the stage as defined in .gitlab-ci.yml.

    Added in GitLab 9.0
    Available in GitLab Runner 0.5

    Raises:
        KeyError: If environment variable not available.

    """

    CI_JOB_STATUS: EnvProxy = EnvProxy("CI_JOB_STATUS")
    """
    The state of the job as each runner stage is executed.
    Use with after_script where CI_JOB_STATUS can be either success,
    failed or canceled.

    Added in GitLab all
    Available in GitLab Runner 13.5

    Raises:
        KeyError: If environment variable not available.

    """

    CI_JOB_TOKEN: str = "${CI_JOB_TOKEN}"
    """
    Token used for authenticating with a few API endpoints and downloading
    dependent repositories. The token is valid as long as the job is running.

    ATTENTION: Contrary to most other variables in this class, this variable is not resolved at rendering
    time. Instead the variable string is returned, which is then resolved during pipeline execution.
    This is because the value contains sensitive information.

    Added in GitLab 9.0
    Available in GitLab Runner 1.2

    Raises:
        KeyError: If environment variable not available.

    """

    CI_JOB_JWT: str = "${CI_JOB_JWT}"
    """
    RS256 JSON web token that can be used for authenticating with third
    party systems that support JWT authentication, for example HashiCorp’s Vault.

    ATTENTION: Contrary to most other variables in this class, this variable is not resolved at rendering
    time. Instead the variable string is returned, which is then resolved during pipeline execution.
    This is because the value contains sensitive information.

    Added in GitLab 12.10
    Available in GitLab Runner all

    Raises:
        KeyError: If environment variable not available.

    """

    CI_JOB_URL: EnvProxy = EnvProxy("CI_JOB_URL")
    """
    Job details URL.

    Added in GitLab 11.1
    Available in GitLab Runner 0.5

    Raises:
        KeyError: If environment variable not available.
    """

    CI_KUBERNETES_ACTIVE: OptionalEnvProxy = OptionalEnvProxy("CI_KUBERNETES_ACTIVE")
    """
    Included with the value true only if the pipeline has a Kubernetes
    cluster available for deployments. Not included if no cluster is available.
    Can be used as an alternative to only:kubernetes/except:kubernetes
    with rules:if.

    Added in GitLab 13.0
    Available in GitLab Runner all
    """

    CI_MERGE_REQUEST_ASSIGNEES: OptionalEnvProxy = OptionalEnvProxy(
        "CI_MERGE_REQUEST_ASSIGNEES"
    )
    """
    Comma-separated list of username(s) of assignee(s) for the merge request
    if the pipelines are for merge requests.
    Available only if only [merge_requests] or rules syntax is used and the
    merge request is created.

    Added in GitLab 11.9
    Available in GitLab Runner all

    """

    CI_MERGE_REQUEST_ID: OptionalEnvProxy = OptionalEnvProxy("CI_MERGE_REQUEST_ID")
    """
    The instance-level ID of the merge request. Only available if the
    pipelines are for merge requests and the merge request is created.
    This is a unique ID across all projects on GitLab.

    Added in GitLab 11.6
    Available in GitLab Runner all
    """

    CI_MERGE_REQUEST_IID: OptionalEnvProxy = OptionalEnvProxy("CI_MERGE_REQUEST_IID")
    """
    The project-level IID (internal ID) of the merge request.
    Only available If the pipelines are for merge requests and the merge
    request is created. This ID is unique for the current project.

    Added in GitLab 11.6
    Available in GitLab Runner all
    """

    CI_MERGE_REQUEST_LABELS: OptionalEnvProxy = OptionalEnvProxy(
        "CI_MERGE_REQUEST_LABELS"
    )
    """
    Comma-separated label names of the merge request if the pipelines are
    for merge requests. Available only if only [merge_requests] or rules
    syntax is used and the merge request is created.

    Added in GitLab 11.9
    Available in GitLab Runner all

    """

    CI_MERGE_REQUEST_MILESTONE: OptionalEnvProxy = OptionalEnvProxy(
        "CI_MERGE_REQUEST_MILESTONE"
    )
    """
    The milestone title of the merge request if the pipelines are for merge
    requests. Available only if only [merge_requests] or rules syntax is
    used and the merge request is created.

    Added in GitLab 11.9
    Available in GitLab Runner all

    """

    CI_MERGE_REQUEST_PROJECT_ID: OptionalEnvProxy = OptionalEnvProxy(
        "CI_MERGE_REQUEST_PROJECT_ID"
    )
    """
    The ID of the project of the merge request if the pipelines are for
    merge requests. Available only if only [merge_requests] or rules syntax
    is used and the merge request is created.

    Added in GitLab 11.6
    Available in GitLab Runner all

    """

    CI_MERGE_REQUEST_PROJECT_PATH: OptionalEnvProxy = OptionalEnvProxy(
        "CI_MERGE_REQUEST_PROJECT_PATH"
    )
    """
    The path of the project of the merge request if the pipelines are for
    merge requests (for example stage/awesome-project). Available only
    if only [merge_requests] or rules syntax is used and the merge request
    is created.

    Added in GitLab 11.6
    Available in GitLab Runner all

    """

    CI_MERGE_REQUEST_PROJECT_URL: OptionalEnvProxy = OptionalEnvProxy(
        "CI_MERGE_REQUEST_PROJECT_URL"
    )
    """
    The URL of the project of the merge request if the pipelines are for
    merge requests (for example http://192.168.10.15:3000/stage/awesome-project).
    Available only if only [merge_requests] or rules syntax is used and the merge
    request is created.

    Added in GitLab 11.6
    Available in GitLab Runner all

    """

    CI_MERGE_REQUEST_REF_PATH: OptionalEnvProxy = OptionalEnvProxy(
        "CI_MERGE_REQUEST_REF_PATH"
    )
    """
    The ref path of the merge request if the pipelines are for merge requests.
    (for example refs/merge-requests/1/head). Available only if only
    [merge_requests] or rules syntax is used and the merge request is created.

    Added in GitLab 11.6
    Available in GitLab Runner all

    """

    CI_MERGE_REQUEST_SOURCE_BRANCH_NAME: OptionalEnvProxy = OptionalEnvProxy(
        "CI_MERGE_REQUEST_SOURCE_BRANCH_NAME"
    )
    """
    The source branch name of the merge request if the pipelines are for
    merge requests. Available only if only [merge_requests] or rules syntax
    is used and the merge request is created.

    Added in GitLab 11.6
    Available in GitLab Runner all

    """

    CI_MERGE_REQUEST_SOURCE_BRANCH_SHA: OptionalEnvProxy = OptionalEnvProxy(
        "CI_MERGE_REQUEST_SOURCE_BRANCH_SHA"
    )
    """
    The HEAD SHA of the source branch of the merge request if the pipelines
    are for merge requests. Available only if only [merge_requests] or rules
    syntax is used, the merge request is created, and the pipeline is a
    merged result pipeline.

    Added in GitLab 11.9
    Available in GitLab Runner all

    """

    CI_MERGE_REQUEST_SOURCE_PROJECT_ID: OptionalEnvProxy = OptionalEnvProxy(
        "CI_MERGE_REQUEST_SOURCE_PROJECT_ID"
    )
    """
    The ID of the source project of the merge request if the pipelines are
    for merge requests. Available only if only [merge_requests] or rules
    syntax is used and the merge request is created.

    Added in GitLab 11.6
    Available in GitLab Runner all

    """

    CI_MERGE_REQUEST_SOURCE_PROJECT_PATH: OptionalEnvProxy = OptionalEnvProxy(
        "CI_MERGE_REQUEST_SOURCE_PROJECT_PATH"
    )
    """
    The path of the source project of the merge request if the pipelines
    are for merge requests. Available only if only [merge_requests] or
    rules syntax is used and the merge request is created.

    Added in GitLab 11.6
    Available in GitLab Runner all

    """

    CI_MERGE_REQUEST_SOURCE_PROJECT_URL: OptionalEnvProxy = OptionalEnvProxy(
        "CI_MERGE_REQUEST_SOURCE_PROJECT_URL"
    )
    """
    The URL of the source project of the merge request if the pipelines are
    for merge requests. Available only if only [merge_requests] or rules
    syntax is used and the merge request is created.

    Added in GitLab 11.6
    Available in GitLab Runner all

    """

    CI_MERGE_REQUEST_TARGET_BRANCH_NAME: OptionalEnvProxy = OptionalEnvProxy(
        "CI_MERGE_REQUEST_TARGET_BRANCH_NAME"
    )
    """
    The target branch name of the merge request if the pipelines are for
    merge requests. Available only if only [merge_requests] or rules syntax
    is used and the merge request is created.

    Added in GitLab 11.6
    Available in GitLab Runner all

    """

    CI_MERGE_REQUEST_TARGET_BRANCH_SHA: OptionalEnvProxy = OptionalEnvProxy(
        "CI_MERGE_REQUEST_TARGET_BRANCH_SHA"
    )
    """
    The HEAD SHA of the target branch of the merge request if the pipelines
    are for merge requests. Available only if only [merge_requests] or rules
    syntax is used, the merge request is created, and the pipeline is a merged
    result pipeline.

    Added in GitLab 11.9
    Available in GitLab Runner all

    """

    CI_MERGE_REQUEST_TITLE: OptionalEnvProxy = OptionalEnvProxy(
        "CI_MERGE_REQUEST_TITLE"
    )
    """
    The title of the merge request if the pipelines are for merge requests.
    Available only if only [merge_requests] or rules syntax is used and the
    merge request is created.

    Added in GitLab 11.9
    Available in GitLab Runner all

    """

    CI_MERGE_REQUEST_EVENT_TYPE: OptionalEnvProxy = OptionalEnvProxy(
        "CI_MERGE_REQUEST_EVENT_TYPE"
    )
    """
    The event type of the merge request, if the pipelines are for merge requests.
    Can be detached, merged_result or merge_train.

    Added in GitLab 12.3
    Available in GitLab Runner all
    """

    CI_MERGE_REQUEST_DIFF_ID: OptionalEnvProxy = OptionalEnvProxy(
        "CI_MERGE_REQUEST_DIFF_ID"
    )
    """
    The version of the merge request diff, if the pipelines are for merge requests.

    Added in GitLab 13.7
    Available in GitLab Runner all
    """

    CI_MERGE_REQUEST_DIFF_BASE_SHA: OptionalEnvProxy = OptionalEnvProxy(
        "CI_MERGE_REQUEST_DIFF_BASE_SHA"
    )
    """
    The base SHA of the merge request diff, if the pipelines are for merge requests.

    Added in GitLab 13.7
    Available in GitLab Runner all
    """

    CI_NODE_INDEX: OptionalEnvProxy = OptionalEnvProxy("CI_NODE_INDEX")
    """
    Index of the job in the job set. If the job is not parallelized, this variable is not set.

    Added in GitLab 11.5
    Available in GitLab Runner all
    """

    CI_NODE_TOTAL: EnvProxy = EnvProxy("CI_NODE_TOTAL")
    """
    Total number of instances of this job running in parallel. If the job is not parallelized, this variable is set to 1.

    Added in GitLab 11.5
    Available in GitLab Runner all

    Raises:
        KeyError: If environment variable not available.
    """

    CI_PAGES_DOMAIN: EnvProxy = EnvProxy("CI_PAGES_DOMAIN")
    """
    The configured domain that hosts GitLab Pages.

    Added in GitLab 11.8
    Available in GitLab Runner all

    Raises:
        KeyError: If environment variable not available.
    """

    CI_PAGES_URL: EnvProxy = EnvProxy("CI_PAGES_URL")
    """
    URL to GitLab Pages-built pages. Always belongs to a subdomain of CI_PAGES_DOMAIN.

    Added in GitLab 11.8
    Available in GitLab Runner all

    Raises:
        KeyError: If environment variable not available.
    """

    CI_PIPELINE_ID: EnvProxy = EnvProxy("CI_PIPELINE_ID")
    """
    The instance-level ID of the current pipeline. This is a unique ID
    across all projects on GitLab.

    Added in GitLab 8.10
    Available in GitLab Runner all

    Raises:
        KeyError: If environment variable not available.
    """

    CI_PIPELINE_IID: EnvProxy = EnvProxy("CI_PIPELINE_IID")
    """
    The project-level IID (internal ID) of the current pipeline.
    This ID is unique for the current project.

    Added in GitLab 11.0
    Available in GitLab Runner all

    Raises:
        KeyError: If environment variable not available.
    """

    CI_PIPELINE_SOURCE: EnvProxy = EnvProxy("CI_PIPELINE_SOURCE")
    """
    Indicates how the pipeline was triggered.
    Possible options are push, web, schedule, api, external, chat, webide,
    merge_request_event, external_pull_request_event, parent_pipeline,
    trigger, or pipeline.
    For pipelines created before GitLab 9.5, this is displayed as unknown.

    Added in GitLab 10.0
    Available in GitLab Runner all

    Raises:
        KeyError: If environment variable not available.
    """

    CI_PIPELINE_TRIGGERED: EnvProxy = EnvProxy("CI_PIPELINE_TRIGGERED")
    """
    The flag to indicate that job was triggered.

    Added in GitLab all
    Available in GitLab Runner all

    Raises:
        KeyError: If environment variable not available.
    """

    CI_PIPELINE_URL: EnvProxy = EnvProxy("CI_PIPELINE_URL")
    """
    Pipeline details URL.

    Added in GitLab 11.1
    Available in GitLab Runner 0.5

    Raises:
        KeyError: If environment variable not available.
    """

    CI_PROJECT_CONFIG_PATH: EnvProxy = EnvProxy("CI_PROJECT_CONFIG_PATH")
    """
    The CI configuration path for the project.

    Added in GitLab 13.8
    Available in GitLab Runner all

    Raises:
        KeyError: If environment variable not available.
    """

    CI_PROJECT_DIR: EnvProxy = EnvProxy("CI_PROJECT_DIR")
    """
    The full path where the repository is cloned and where the job is run.
    If the GitLab Runner builds_dir parameter is set, this variable is set
    relative to the value of builds_dir. For more information, see Advanced
    configuration for GitLab Runner.

    Added in GitLab all
    Available in GitLab Runner all

    Raises:
        KeyError: If environment variable not available.
    """

    CI_PROJECT_ID: EnvProxy = EnvProxy("CI_PROJECT_ID")
    """
    The unique ID of the current project that GitLab CI/CD uses internally.

    Added in GitLab all
    Available in GitLab Runner all

    Raises:
        KeyError: If environment variable not available.
    """

    CI_PROJECT_NAME: EnvProxy = EnvProxy("CI_PROJECT_NAME")
    """
    The name of the directory for the project that is being built.
    For example, if the project URL is gitlab.example.com/group-name/project-1,
    the CI_PROJECT_NAME would be project-1.

    Added in GitLab 8.10
    Available in GitLab Runner 0.5

    Raises:
        KeyError: If environment variable not available.
    """

    CI_PROJECT_NAMESPACE: EnvProxy = EnvProxy("CI_PROJECT_NAMESPACE")
    """
    The project namespace (username or group name) that is being built.

    Added in GitLab 8.10
    Available in GitLab Runner 0.5

    Raises:
        KeyError: If environment variable not available.
    """

    CI_PROJECT_ROOT_NAMESPACE: EnvProxy = EnvProxy("CI_PROJECT_ROOT_NAMESPACE")
    """
    The root project namespace (username or group name) that is being built.
    For example, if CI_PROJECT_NAMESPACE is root-group/child-group/grandchild-group,
    CI_PROJECT_ROOT_NAMESPACE would be root-group.

    Added in GitLab 13.2
    Available in GitLab Runner 0.5

    Raises:
        KeyError: If environment variable not available.
    """

    CI_PROJECT_PATH: EnvProxy = EnvProxy("CI_PROJECT_PATH")
    """
    The namespace with the project name.

    Added in GitLab 8.10
    Available in GitLab Runner 0.5

    Raises:
        KeyError: If environment variable not available.
    """

    CI_PROJECT_PATH_SLUG: EnvProxy = EnvProxy("CI_PROJECT_PATH_SLUG")
    """
    $CI_PROJECT_PATH in lowercase and with everything except 0-9 and a-z replaced with -. Use in URLs and domain names.

    Added in GitLab 9.3
    Available in GitLab Runner all

    Raises:
        KeyError: If environment variable not available.
    """

    CI_PROJECT_REPOSITORY_LANGUAGES: EnvProxy = EnvProxy(
        "CI_PROJECT_REPOSITORY_LANGUAGES"
    )
    """
    Comma-separated, lowercase list of the languages used in the repository (for example ruby,javascript,html,css).

    Added in GitLab 12.3
    Available in GitLab Runner all

    Raises:
        KeyError: If environment variable not available.
    """

    CI_PROJECT_TITLE: EnvProxy = EnvProxy("CI_PROJECT_TITLE")
    """
    The human-readable project name as displayed in the GitLab web interface.

    Added in GitLab 12.4
    Available in GitLab Runner all

    Raises:
        KeyError: If environment variable not available.
    """

    CI_PROJECT_URL: EnvProxy = EnvProxy("CI_PROJECT_URL")
    """
    The HTTP(S) address to access project.

    Added in GitLab 8.10
    Available in GitLab Runner 0.5

    Raises:
        KeyError: If environment variable not available.
    """

    CI_PROJECT_VISIBILITY: EnvProxy = EnvProxy("CI_PROJECT_VISIBILITY")
    """
    The project visibility (internal, private, public).

    Added in GitLab 10.3
    Available in GitLab Runner all

    Raises:
        KeyError: If environment variable not available.
    """

    CI_REGISTRY: OptionalEnvProxy = OptionalEnvProxy("CI_REGISTRY")
    """
    GitLab Container Registry. This variable includes a :port value if one
    has been specified in the registry configuration.

    Added in GitLab 8.10
    Available in GitLab Runner 0.5
    """

    CI_REGISTRY_IMAGE: OptionalEnvProxy = OptionalEnvProxy("CI_REGISTRY_IMAGE")
    """
    The address of the registry tied to the specific project.

    Added in GitLab 8.10
    Available in GitLab Runner 0.5

    Raises:
        KeyError: If environment variable not available.
    """

    CI_REGISTRY_PASSWORD: str = "${CI_REGISTRY_PASSWORD}"
    """
    The password to use to push containers to the GitLab Container Registry, for the current project.

    ATTENTION: Contrary to most other variables in this class, this variable is not resolved at rendering
    time. Instead the variable string is returned, which is then resolved during pipeline execution.
    This is because the value contains sensitive information.

    Added in GitLab 9.0
    Available in GitLab Runner all
    """

    CI_REGISTRY_USER: OptionalEnvProxy = OptionalEnvProxy("CI_REGISTRY_USER")
    """
    The username to use to push containers to the GitLab Container Registry, for the current project.

    Added in GitLab 9.0
    Available in GitLab Runner all
    """

    CI_REPOSITORY_URL: str = "${CI_REPOSITORY_URL}"
    """
    The URL to clone the Git repository.

    ATTENTION: Contrary to most other variables in this class, this variable is not resolved at rendering
    time. Instead the variable string is returned, which is then resolved during pipeline execution.
    This is because the value contains sensitive information.

    Added in GitLab 9.0
    Available in GitLab Runner all

    Raises:
        KeyError: If environment variable not available.
    """

    CI_RUNNER_DESCRIPTION: EnvProxy = EnvProxy("CI_RUNNER_DESCRIPTION")
    """
    The description of the runner as saved in GitLab.

    Added in GitLab 8.10
    Available in GitLab Runner 0.5

    Raises:
        KeyError: If environment variable not available.
    """

    CI_RUNNER_EXECUTABLE_ARCH: EnvProxy = EnvProxy("CI_RUNNER_EXECUTABLE_ARCH")
    """
    The OS/architecture of the GitLab Runner executable (note that this is not necessarily the same as the environment of the executor).

    Added in GitLab all
    Available in GitLab Runner 10.6

    Raises:
        KeyError: If environment variable not available.
    """

    CI_RUNNER_ID: EnvProxy = EnvProxy("CI_RUNNER_ID")
    """
    The unique ID of runner being used.

    Added in GitLab 8.10
    Available in GitLab Runner 0.5

    Raises:
        KeyError: If environment variable not available.
    """

    CI_RUNNER_REVISION: EnvProxy = EnvProxy("CI_RUNNER_REVISION")
    """
    GitLab Runner revision that is executing the current job.

    Added in GitLab all
    Available in GitLab Runner 10.6

    Raises:
        KeyError: If environment variable not available.
    """

    CI_RUNNER_SHORT_TOKEN: str = "${CI_RUNNER_SHORT_TOKEN}"
    """
    First eight characters of the runner’s token used to authenticate new job requests. Used as the runner’s unique ID.

    ATTENTION: Contrary to most other variables in this class, this variable is not resolved at rendering
    time. Instead the variable string is returned, which is then resolved during pipeline execution.
    This is because the value contains sensitive information.

    Added in GitLab all
    Available in GitLab Runner 12.3

    Raises:
        KeyError: If environment variable not available.
    """

    CI_RUNNER_TAGS: EnvProxy = EnvProxy("CI_RUNNER_TAGS")
    """
    The defined runner tags.

    Added in GitLab 8.10
    Available in GitLab Runner 0.5

    Raises:
        KeyError: If environment variable not available.
    """

    CI_RUNNER_VERSION: EnvProxy = EnvProxy("CI_RUNNER_VERSION")
    """
    GitLab Runner version that is executing the current job.

    Added in GitLab all
    Available in GitLab Runner 10.6

    Raises:
        KeyError: If environment variable not available.
    """

    CI_SERVER: EnvProxy = EnvProxy("CI_SERVER")
    """
    Mark that job is executed in CI environment.

    Added in GitLab all
    Available in GitLab Runner all

    Raises:
        KeyError: If environment variable not available.
    """

    CI_SERVER_URL: EnvProxy = EnvProxy("CI_SERVER_URL")
    """
    The base URL of the GitLab instance, including protocol and port (like https://gitlab.example.com:8080).

    Added in GitLab 12.7
    Available in GitLab Runner all

    Raises:
        KeyError: If environment variable not available.
    """

    CI_SERVER_HOST: EnvProxy = EnvProxy("CI_SERVER_HOST")
    """
    Host component of the GitLab instance URL, without protocol and port (like gitlab.example.com).

    Added in GitLab 12.1
    Available in GitLab Runner all

    Raises:
        KeyError: If environment variable not available.
    """

    CI_SERVER_PORT: EnvProxy = EnvProxy("CI_SERVER_PORT")
    """
    Port component of the GitLab instance URL, without host and protocol (like 3000).

    Added in GitLab 12.8
    Available in GitLab Runner all

    Raises:
        KeyError: If environment variable not available.
    """

    CI_SERVER_PROTOCOL: EnvProxy = EnvProxy("CI_SERVER_PROTOCOL")
    """
    Protocol component of the GitLab instance URL, without host and port (like https).

    Added in GitLab 12.8
    Available in GitLab Runner all

    Raises:
        KeyError: If environment variable not available.
    """

    CI_SERVER_NAME: EnvProxy = EnvProxy("CI_SERVER_NAME")
    """
    The name of CI server that is used to coordinate jobs.

    Added in GitLab all
    Available in GitLab Runner all

    Raises:
        KeyError: If environment variable not available.
    """

    CI_SERVER_REVISION: EnvProxy = EnvProxy("CI_SERVER_REVISION")
    """
    GitLab revision that is used to schedule jobs.

    Added in GitLab all
    Available in GitLab Runner all

    Raises:
        KeyError: If environment variable not available.
    """

    CI_SERVER_VERSION: EnvProxy = EnvProxy("CI_SERVER_VERSION")
    """
    GitLab version that is used to schedule jobs.

    Added in GitLab all
    Available in GitLab Runner all

    Raises:
        KeyError: If environment variable not available.
    """

    CI_SERVER_VERSION_MAJOR: EnvProxy = EnvProxy("CI_SERVER_VERSION_MAJOR")
    """
    GitLab version major component.

    Added in GitLab 11.4
    Available in GitLab Runner all

    Raises:
        KeyError: If environment variable not available.
    """

    CI_SERVER_VERSION_MINOR: EnvProxy = EnvProxy("CI_SERVER_VERSION_MINOR")
    """
    GitLab version minor component.

    Added in GitLab 11.4
    Available in GitLab Runner all

    Raises:
        KeyError: If environment variable not available.
    """

    CI_SERVER_VERSION_PATCH: EnvProxy = EnvProxy("CI_SERVER_VERSION_PATCH")
    """
    GitLab version patch component.

    Added in GitLab 11.4
    Available in GitLab Runner all

    Raises:
        KeyError: If environment variable not available.
    """

    CI_SHARED_ENVIRONMENT: OptionalEnvProxy = OptionalEnvProxy("CI_SHARED_ENVIRONMENT")
    """
    Marks that the job is executed in a shared environment (something that
    is persisted across CI invocations like shell or ssh executor).
    If the environment is shared, it is set to true, otherwise it is not
    defined at all.

    Added in GitLab all
    Available in GitLab Runner 10.1
    """

    GITLAB_CI: EnvProxy = EnvProxy("GITLAB_CI")
    """
    Mark that job is executed in GitLab CI/CD environment.

    Added in GitLab all
    Available in GitLab Runner all

    Raises:
        KeyError: If environment variable not available.
    """

    GITLAB_FEATURES: EnvProxy = EnvProxy("GITLAB_FEATURES")
    """
    The comma separated list of licensed features available for your instance and plan.

    Added in GitLab 10.6
    Available in GitLab Runner all

    Raises:
        KeyError: If environment variable not available.
    """

    GITLAB_USER_EMAIL: EnvProxy = EnvProxy("GITLAB_USER_EMAIL")
    """
    The email of the user who started the job.

    Added in GitLab 8.12
    Available in GitLab Runner all

    Raises:
        KeyError: If environment variable not available.
    """

    GITLAB_USER_ID: EnvProxy = EnvProxy("GITLAB_USER_ID")
    """
    The ID of the user who started the job.

    Added in GitLab 8.12
    Available in GitLab Runner all

    Raises:
        KeyError: If environment variable not available.
    """

    GITLAB_USER_LOGIN: EnvProxy = EnvProxy("GITLAB_USER_LOGIN")
    """
    The login username of the user who started the job.

    Added in GitLab 10.0
    Available in GitLab Runner all

    Raises:
        KeyError: If environment variable not available.
    """

    GITLAB_USER_NAME: EnvProxy = EnvProxy("GITLAB_USER_NAME")
    """
    The real name of the user who started the job.

    Added in GitLab 10.0
    Available in GitLab Runner all

    Raises:
        KeyError: If environment variable not available.
    """

    TRIGGER_PAYLOAD: OptionalEnvProxy = OptionalEnvProxy("TRIGGER_PAYLOAD")
    """
    This variable is available when a pipeline is triggered with a webhook

    Added in GitLab 13.9
    Available in GitLab Runner all
    """

Class variables

var CHAT_CHANNEL : EnvProxy

Source chat channel which triggered the ChatOps command.

Added in GitLab 10.6 Available in GitLab Runner all

Raises

KeyError
If environment variable not available.
var CHAT_INPUT : EnvProxy

Additional arguments passed in the ChatOps command.

Added in GitLab 10.6 Available in GitLab Runner all

Raises

KeyError
If environment variable not available.
var CI : EnvProxy

Mark that job is executed in CI environment.

Added in GitLab all Available in GitLab Runner 0.4

Raises

KeyError
If environment variable not available.
var CI_API_V4_URL : EnvProxy

The GitLab API v4 root URL.

Added in GitLab 11.7 Available in GitLab Runner all

Raises

KeyError
If environment variable not available.
var CI_BUILDS_DIR : EnvProxy

Top-level directory where builds are executed.

Added in GitLab all Available in GitLab Runner 11.10

Raises

KeyError
If environment variable not available.
var CI_COMMIT_BEFORE_SHA : EnvProxy

The previous latest commit present on a branch. Is always 0000000000000000000000000000000000000000 in pipelines for merge requests.

Added in GitLab 11.2 Available in GitLab Runner all

Raises

KeyError
If environment variable not available.
var CI_COMMIT_BRANCH : OptionalEnvProxy

The commit branch name. Present in branch pipelines, including pipelines for the default branch. Not present in merge request pipelines or tag pipelines.

Added in GitLab 12.6 Available in GitLab Runner 0.5

var CI_COMMIT_DESCRIPTION : EnvProxy

The description of the commit: the message without the first line, if the title is shorter than 100 characters; the full message otherwise.

Added in GitLab 10.8 Available in GitLab Runner all

Raises

KeyError
If environment variable not available.
var CI_COMMIT_MESSAGE : EnvProxy

The full commit message.

Added in GitLab 10.8 Available in GitLab Runner all

Raises

KeyError
If environment variable not available.
var CI_COMMIT_REF_NAME : EnvProxy

The branch or tag name for which project is built.

Added in GitLab 9.0 Available in GitLab Runner all

Raises

KeyError
If environment variable not available.
var CI_COMMIT_REF_PROTECTED : EnvProxy

true if the job is running on a protected reference, false if not.

Added in GitLab 11.11 Available in GitLab Runner all

Raises

KeyError
If environment variable not available.
var CI_COMMIT_REF_SLUG : EnvProxy

$CI_COMMIT_REF_NAME in lowercase, shortened to 63 bytes, and with everything except 0-9 and a-z replaced with -. No leading / trailing -. Use in URLs, host names and domain names.

Added in GitLab 9.0 Available in GitLab Runner all

Raises

KeyError
If environment variable not available.
var CI_COMMIT_SHA : EnvProxy

The commit revision for which project is built.

Added in GitLab 9.0 Available in GitLab Runner all

Raises

KeyError
If environment variable not available.
var CI_COMMIT_SHORT_SHA : EnvProxy

The first eight characters of CI_COMMIT_SHA.

Added in GitLab 11.7 Available in GitLab Runner all

Raises

KeyError
If environment variable not available.
var CI_COMMIT_TAG : OptionalEnvProxy

The commit tag name. Present only when building tags.

Added in GitLab 9.0 Available in GitLab Runner 0.5

var CI_COMMIT_TAG_MESSAGE : OptionalEnvProxy

The commit tag message. Available only in pipelines for tags.

Added in Gitlab 15.5 Available in GitLab Runner all

var CI_COMMIT_TIMESTAMP : EnvProxy

The timestamp of the commit in the ISO 8601 format.

Added in GitLab 13.4 Available in GitLab Runner all

Raises

KeyError
If environment variable not available.
var CI_COMMIT_TITLE : EnvProxy

The title of the commit - the full first line of the message.

Added in GitLab 10.8 Available in GitLab Runner all

Raises

KeyError
If environment variable not available.
var CI_CONCURRENT_ID : EnvProxy

Unique ID of build execution in a single executor.

Added in GitLab all Available in GitLab Runner 11.10

Raises

KeyError
If environment variable not available.
var CI_CONCURRENT_PROJECT_ID : EnvProxy

Unique ID of build execution in a single executor and project.

Added in GitLab all Available in GitLab Runner 11.10

Raises

KeyError
If environment variable not available.
var CI_CONFIG_PATH : EnvProxy

The path to the CI configuration file. Defaults to .gitlab-ci.yml.

Added in GitLab 9.4 Available in GitLab Runner 0.5

Raises

KeyError
If environment variable not available.
var CI_DEBUG_TRACE : EnvProxy

Whether debug logging (tracing) is enabled.

Added in GitLab all Available in GitLab Runner 1.7

Raises

KeyError
If environment variable not available.
var CI_DEFAULT_BRANCH : EnvProxy

The name of the default branch for the project.

Added in GitLab 12.4 Available in GitLab Runner all

Raises

KeyError
If environment variable not available.
var CI_DEPENDENCY_PROXY_GROUP_IMAGE_PREFIX : EnvProxy

The image prefix for pulling images through the Dependency Proxy.

Added in GitLab 13.7 Available in GitLab Runner all

Raises

KeyError
If environment variable not available.
var CI_DEPENDENCY_PROXY_PASSWORD : str

The password to use to pull images through the Dependency Proxy.

ATTENTION: Contrary to most other variables in this class, this variable is not resolved at rendering time. Instead the variable string is returned, which is then resolved during pipeline execution. This is because the value contains sensitive information.

Added in GitLab 13.7 Available in GitLab Runner all

Raises

KeyError
If environment variable not available.
var CI_DEPENDENCY_PROXY_SERVER : EnvProxy

The server for logging in to the Dependency Proxy. This is equivalent to $CI_SERVER_HOST:$CI_SERVER_PORT.

Added in GitLab 13.7 Available in GitLab Runner all

Raises

KeyError
If environment variable not available.
var CI_DEPENDENCY_PROXY_USER : EnvProxy

The username to use to pull images through the Dependency Proxy.

Added in GitLab 13.7 Available in GitLab Runner all

Raises

KeyError
If environment variable not available.
var CI_DEPLOY_FREEZE : OptionalEnvProxy

Included with the value true if the pipeline runs during a deploy freeze window.

Added in GitLab 13.2 Available in GitLab Runner all

var CI_DEPLOY_PASSWORD : str

Authentication password of the GitLab Deploy Token, only present if the Project has one related.

ATTENTION: Contrary to most other variables in this class, this variable is not resolved at rendering time. Instead the variable string is returned, which is then resolved during pipeline execution. This is because the value contains sensitive information.

Added in GitLab 10.8 Available in GitLab Runner all

Raises

KeyError
If environment variable not available.
var CI_DEPLOY_USER : EnvProxy

Authentication username of the GitLab Deploy Token, only present if the Project has one related.

Added in GitLab 10.8 Available in GitLab Runner all

Raises

KeyError
If environment variable not available.
var CI_DISPOSABLE_ENVIRONMENT : OptionalEnvProxy

Marks that the job is executed in a disposable environment (something that is created only for this job and disposed of/destroyed after the execution - all executors except shell and ssh). If the environment is disposable, it is set to true, otherwise it is not defined at all.

Added in GitLab all Available in GitLab Runner 10.1

var CI_ENVIRONMENT_NAME : OptionalEnvProxy

The name of the environment for this job. Only present if environment:name is set.

Added in GitLab 8.15 Available in GitLab Runner all

var CI_ENVIRONMENT_SLUG : OptionalEnvProxy

A simplified version of the environment name, suitable for inclusion in DNS, URLs, Kubernetes labels, and so on. Only present if environment:name is set.

Added in GitLab 8.15 Available in GitLab Runner all

var CI_ENVIRONMENT_URL : OptionalEnvProxy

The URL of the environment for this job. Only present if environment:url is set.

Added in GitLab 9.3 Available in GitLab Runner all

var CI_EXTERNAL_PULL_REQUEST_IID : OptionalEnvProxy

Pull Request ID from GitHub if the pipelines are for external pull requests. Available only if only [external_pull_requests] or rules syntax is used and the pull request is open.

Added in GitLab 12.3 Available in GitLab Runner all

var CI_EXTERNAL_PULL_REQUEST_SOURCE_BRANCH_NAME : OptionalEnvProxy

The source branch name of the pull request if the pipelines are for external pull requests. Available only if only [external_pull_requests] or rules syntax is used and the pull request is open.

Added in GitLab 12.3 Available in GitLab Runner all

var CI_EXTERNAL_PULL_REQUEST_SOURCE_BRANCH_SHA : OptionalEnvProxy

The HEAD SHA of the source branch of the pull request if the pipelines are for external pull requests. Available only if only [external_pull_requests] or rules syntax is used and the pull request is open.

Added in GitLab 12.3 Available in GitLab Runner all

var CI_EXTERNAL_PULL_REQUEST_SOURCE_REPOSITORY : OptionalEnvProxy

The source repository name of the pull request if the pipelines are for external pull requests. Available only if only [external_pull_requests] or rules syntax is used and the pull request is open.

Added in GitLab 13.3 Available in GitLab Runner all

var CI_EXTERNAL_PULL_REQUEST_TARGET_BRANCH_NAME : OptionalEnvProxy

The target branch name of the pull request if the pipelines are for external pull requests. Available only if only [external_pull_requests] or rules syntax is used and the pull request is open.

Added in GitLab 12.3 Available in GitLab Runner all

var CI_EXTERNAL_PULL_REQUEST_TARGET_BRANCH_SHA : OptionalEnvProxy

The HEAD SHA of the target branch of the pull request if the pipelines are for external pull requests. Available only if only [external_pull_requests] or rules syntax is used and the pull request is open.

Added in GitLab 12.3 Available in GitLab Runner all

var CI_EXTERNAL_PULL_REQUEST_TARGET_REPOSITORY : OptionalEnvProxy

The target repository name of the pull request if the pipelines are for external pull requests. Available only if only [external_pull_requests] or rules syntax is used and the pull request is open.

Added in GitLab 13.3 Available in GitLab Runner all

var CI_HAS_OPEN_REQUIREMENTS : OptionalEnvProxy

Included with the value true only if the pipeline’s project has any open requirements. Not included if there are no open requirements for the pipeline’s project.

Added in GitLab 13.1 Available in GitLab Runner all

var CI_JOB_ID : EnvProxy

The unique ID of the current job that GitLab CI/CD uses internally.

Added in GitLab 9.0 Available in GitLab Runner all

Raises

KeyError
If environment variable not available.
var CI_JOB_IMAGE : EnvProxy

The name of the image running the CI job.

Added in GitLab 12.9 Available in GitLab Runner 12.9

Raises

KeyError
If environment variable not available.
var CI_JOB_JWT : str

RS256 JSON web token that can be used for authenticating with third party systems that support JWT authentication, for example HashiCorp’s Vault.

ATTENTION: Contrary to most other variables in this class, this variable is not resolved at rendering time. Instead the variable string is returned, which is then resolved during pipeline execution. This is because the value contains sensitive information.

Added in GitLab 12.10 Available in GitLab Runner all

Raises

KeyError
If environment variable not available.
var CI_JOB_MANUAL : EnvProxy

The flag to indicate that job was manually started.

Added in GitLab 8.12 Available in GitLab Runner all

Raises

KeyError
If environment variable not available.
var CI_JOB_NAME : EnvProxy

The name of the job as defined in .gitlab-ci.yml.

Added in GitLab 9.0 Available in GitLab Runner 0.5

Raises

KeyError
If environment variable not available.
var CI_JOB_STAGE : EnvProxy

The name of the stage as defined in .gitlab-ci.yml.

Added in GitLab 9.0 Available in GitLab Runner 0.5

Raises

KeyError
If environment variable not available.
var CI_JOB_STATUS : EnvProxy

The state of the job as each runner stage is executed. Use with after_script where CI_JOB_STATUS can be either success, failed or canceled.

Added in GitLab all Available in GitLab Runner 13.5

Raises

KeyError
If environment variable not available.
var CI_JOB_TOKEN : str

Token used for authenticating with a few API endpoints and downloading dependent repositories. The token is valid as long as the job is running.

ATTENTION: Contrary to most other variables in this class, this variable is not resolved at rendering time. Instead the variable string is returned, which is then resolved during pipeline execution. This is because the value contains sensitive information.

Added in GitLab 9.0 Available in GitLab Runner 1.2

Raises

KeyError
If environment variable not available.
var CI_JOB_URL : EnvProxy

Job details URL.

Added in GitLab 11.1 Available in GitLab Runner 0.5

Raises

KeyError
If environment variable not available.
var CI_KUBERNETES_ACTIVE : OptionalEnvProxy

Included with the value true only if the pipeline has a Kubernetes cluster available for deployments. Not included if no cluster is available. Can be used as an alternative to only:kubernetes/except:kubernetes with rules:if.

Added in GitLab 13.0 Available in GitLab Runner all

var CI_MERGE_REQUEST_ASSIGNEES : OptionalEnvProxy

Comma-separated list of username(s) of assignee(s) for the merge request if the pipelines are for merge requests. Available only if only [merge_requests] or rules syntax is used and the merge request is created.

Added in GitLab 11.9 Available in GitLab Runner all

var CI_MERGE_REQUEST_DIFF_BASE_SHA : OptionalEnvProxy

The base SHA of the merge request diff, if the pipelines are for merge requests.

Added in GitLab 13.7 Available in GitLab Runner all

var CI_MERGE_REQUEST_DIFF_ID : OptionalEnvProxy

The version of the merge request diff, if the pipelines are for merge requests.

Added in GitLab 13.7 Available in GitLab Runner all

var CI_MERGE_REQUEST_EVENT_TYPE : OptionalEnvProxy

The event type of the merge request, if the pipelines are for merge requests. Can be detached, merged_result or merge_train.

Added in GitLab 12.3 Available in GitLab Runner all

var CI_MERGE_REQUEST_ID : OptionalEnvProxy

The instance-level ID of the merge request. Only available if the pipelines are for merge requests and the merge request is created. This is a unique ID across all projects on GitLab.

Added in GitLab 11.6 Available in GitLab Runner all

var CI_MERGE_REQUEST_IID : OptionalEnvProxy

The project-level IID (internal ID) of the merge request. Only available If the pipelines are for merge requests and the merge request is created. This ID is unique for the current project.

Added in GitLab 11.6 Available in GitLab Runner all

var CI_MERGE_REQUEST_LABELS : OptionalEnvProxy

Comma-separated label names of the merge request if the pipelines are for merge requests. Available only if only [merge_requests] or rules syntax is used and the merge request is created.

Added in GitLab 11.9 Available in GitLab Runner all

var CI_MERGE_REQUEST_MILESTONE : OptionalEnvProxy

The milestone title of the merge request if the pipelines are for merge requests. Available only if only [merge_requests] or rules syntax is used and the merge request is created.

Added in GitLab 11.9 Available in GitLab Runner all

var CI_MERGE_REQUEST_PROJECT_ID : OptionalEnvProxy

The ID of the project of the merge request if the pipelines are for merge requests. Available only if only [merge_requests] or rules syntax is used and the merge request is created.

Added in GitLab 11.6 Available in GitLab Runner all

var CI_MERGE_REQUEST_PROJECT_PATH : OptionalEnvProxy

The path of the project of the merge request if the pipelines are for merge requests (for example stage/awesome-project). Available only if only [merge_requests] or rules syntax is used and the merge request is created.

Added in GitLab 11.6 Available in GitLab Runner all

var CI_MERGE_REQUEST_PROJECT_URL : OptionalEnvProxy

The URL of the project of the merge request if the pipelines are for merge requests (for example http://192.168.10.15:3000/stage/awesome-project). Available only if only [merge_requests] or rules syntax is used and the merge request is created.

Added in GitLab 11.6 Available in GitLab Runner all

var CI_MERGE_REQUEST_REF_PATH : OptionalEnvProxy

The ref path of the merge request if the pipelines are for merge requests. (for example refs/merge-requests/1/head). Available only if only [merge_requests] or rules syntax is used and the merge request is created.

Added in GitLab 11.6 Available in GitLab Runner all

var CI_MERGE_REQUEST_SOURCE_BRANCH_NAME : OptionalEnvProxy

The source branch name of the merge request if the pipelines are for merge requests. Available only if only [merge_requests] or rules syntax is used and the merge request is created.

Added in GitLab 11.6 Available in GitLab Runner all

var CI_MERGE_REQUEST_SOURCE_BRANCH_SHA : OptionalEnvProxy

The HEAD SHA of the source branch of the merge request if the pipelines are for merge requests. Available only if only [merge_requests] or rules syntax is used, the merge request is created, and the pipeline is a merged result pipeline.

Added in GitLab 11.9 Available in GitLab Runner all

var CI_MERGE_REQUEST_SOURCE_PROJECT_ID : OptionalEnvProxy

The ID of the source project of the merge request if the pipelines are for merge requests. Available only if only [merge_requests] or rules syntax is used and the merge request is created.

Added in GitLab 11.6 Available in GitLab Runner all

var CI_MERGE_REQUEST_SOURCE_PROJECT_PATH : OptionalEnvProxy

The path of the source project of the merge request if the pipelines are for merge requests. Available only if only [merge_requests] or rules syntax is used and the merge request is created.

Added in GitLab 11.6 Available in GitLab Runner all

var CI_MERGE_REQUEST_SOURCE_PROJECT_URL : OptionalEnvProxy

The URL of the source project of the merge request if the pipelines are for merge requests. Available only if only [merge_requests] or rules syntax is used and the merge request is created.

Added in GitLab 11.6 Available in GitLab Runner all

var CI_MERGE_REQUEST_TARGET_BRANCH_NAME : OptionalEnvProxy

The target branch name of the merge request if the pipelines are for merge requests. Available only if only [merge_requests] or rules syntax is used and the merge request is created.

Added in GitLab 11.6 Available in GitLab Runner all

var CI_MERGE_REQUEST_TARGET_BRANCH_SHA : OptionalEnvProxy

The HEAD SHA of the target branch of the merge request if the pipelines are for merge requests. Available only if only [merge_requests] or rules syntax is used, the merge request is created, and the pipeline is a merged result pipeline.

Added in GitLab 11.9 Available in GitLab Runner all

var CI_MERGE_REQUEST_TITLE : OptionalEnvProxy

The title of the merge request if the pipelines are for merge requests. Available only if only [merge_requests] or rules syntax is used and the merge request is created.

Added in GitLab 11.9 Available in GitLab Runner all

var CI_NODE_INDEX : OptionalEnvProxy

Index of the job in the job set. If the job is not parallelized, this variable is not set.

Added in GitLab 11.5 Available in GitLab Runner all

var CI_NODE_TOTAL : EnvProxy

Total number of instances of this job running in parallel. If the job is not parallelized, this variable is set to 1.

Added in GitLab 11.5 Available in GitLab Runner all

Raises

KeyError
If environment variable not available.
var CI_OPEN_MERGE_REQUESTS : OptionalEnvProxy

Available in branch and merge request pipelines. Contains a comma-separated list of up to four merge requests that use the current branch and project as the merge request source. For example gitlab-org/gitlab!333,gitlab-org/gitlab-foss!11.

Added in GitLab 13.8 Available in GitLab Runner all

var CI_PAGES_DOMAIN : EnvProxy

The configured domain that hosts GitLab Pages.

Added in GitLab 11.8 Available in GitLab Runner all

Raises

KeyError
If environment variable not available.
var CI_PAGES_URL : EnvProxy

URL to GitLab Pages-built pages. Always belongs to a subdomain of CI_PAGES_DOMAIN.

Added in GitLab 11.8 Available in GitLab Runner all

Raises

KeyError
If environment variable not available.
var CI_PIPELINE_ID : EnvProxy

The instance-level ID of the current pipeline. This is a unique ID across all projects on GitLab.

Added in GitLab 8.10 Available in GitLab Runner all

Raises

KeyError
If environment variable not available.
var CI_PIPELINE_IID : EnvProxy

The project-level IID (internal ID) of the current pipeline. This ID is unique for the current project.

Added in GitLab 11.0 Available in GitLab Runner all

Raises

KeyError
If environment variable not available.
var CI_PIPELINE_SOURCE : EnvProxy

Indicates how the pipeline was triggered. Possible options are push, web, schedule, api, external, chat, webide, merge_request_event, external_pull_request_event, parent_pipeline, trigger, or pipeline. For pipelines created before GitLab 9.5, this is displayed as unknown.

Added in GitLab 10.0 Available in GitLab Runner all

Raises

KeyError
If environment variable not available.
var CI_PIPELINE_TRIGGERED : EnvProxy

The flag to indicate that job was triggered.

Added in GitLab all Available in GitLab Runner all

Raises

KeyError
If environment variable not available.
var CI_PIPELINE_URL : EnvProxy

Pipeline details URL.

Added in GitLab 11.1 Available in GitLab Runner 0.5

Raises

KeyError
If environment variable not available.
var CI_PROJECT_CONFIG_PATH : EnvProxy

The CI configuration path for the project.

Added in GitLab 13.8 Available in GitLab Runner all

Raises

KeyError
If environment variable not available.
var CI_PROJECT_DIR : EnvProxy

The full path where the repository is cloned and where the job is run. If the GitLab Runner builds_dir parameter is set, this variable is set relative to the value of builds_dir. For more information, see Advanced configuration for GitLab Runner.

Added in GitLab all Available in GitLab Runner all

Raises

KeyError
If environment variable not available.
var CI_PROJECT_ID : EnvProxy

The unique ID of the current project that GitLab CI/CD uses internally.

Added in GitLab all Available in GitLab Runner all

Raises

KeyError
If environment variable not available.
var CI_PROJECT_NAME : EnvProxy

The name of the directory for the project that is being built. For example, if the project URL is gitlab.example.com/group-name/project-1, the CI_PROJECT_NAME would be project-1.

Added in GitLab 8.10 Available in GitLab Runner 0.5

Raises

KeyError
If environment variable not available.
var CI_PROJECT_NAMESPACE : EnvProxy

The project namespace (username or group name) that is being built.

Added in GitLab 8.10 Available in GitLab Runner 0.5

Raises

KeyError
If environment variable not available.
var CI_PROJECT_PATH : EnvProxy

The project namespace with the project name included.

Added in GitLab 8.10 Available in GitLab Runner 0.5

Raises

KeyError
If environment variable not available.
var CI_PROJECT_PATH_SLUG : EnvProxy

$CI_PROJECT_PATH in lowercase and with everything except 0-9 and a-z replaced with -. Use in URLs and domain names.

Added in GitLab 9.3 Available in GitLab Runner all

Raises

KeyError
If environment variable not available.
var CI_PROJECT_REPOSITORY_LANGUAGES : EnvProxy

Comma-separated, lowercase list of the languages used in the repository (for example ruby,javascript,html,css).

Added in GitLab 12.3 Available in GitLab Runner all

Raises

KeyError
If environment variable not available.
var CI_PROJECT_ROOT_NAMESPACE : EnvProxy

The root project namespace (username or group name) that is being built. For example, if CI_PROJECT_NAMESPACE is root-group/child-group/grandchild-group, CI_PROJECT_ROOT_NAMESPACE would be root-group.

Added in GitLab 13.2 Available in GitLab Runner 0.5

Raises

KeyError
If environment variable not available.
var CI_PROJECT_TITLE : EnvProxy

The human-readable project name as displayed in the GitLab web interface.

Added in GitLab 12.4 Available in GitLab Runner all

Raises

KeyError
If environment variable not available.
var CI_PROJECT_URL : EnvProxy

The HTTP(S) address to access project.

Added in GitLab 8.10 Available in GitLab Runner 0.5

Raises

KeyError
If environment variable not available.
var CI_PROJECT_VISIBILITY : EnvProxy

The project visibility (internal, private, public).

Added in GitLab 10.3 Available in GitLab Runner all

Raises

KeyError
If environment variable not available.
var CI_REGISTRY : OptionalEnvProxy

GitLab Container Registry. This variable includes a :port value if one has been specified in the registry configuration.

Added in GitLab 8.10 Available in GitLab Runner 0.5

var CI_REGISTRY_IMAGE : OptionalEnvProxy

The address of the registry tied to the specific project.

Added in GitLab 8.10 Available in GitLab Runner 0.5

Raises

KeyError
If environment variable not available.
var CI_REGISTRY_PASSWORD : str

The password to use to push containers to the GitLab Container Registry, for the current project.

ATTENTION: Contrary to most other variables in this class, this variable is not resolved at rendering time. Instead the variable string is returned, which is then resolved during pipeline execution. This is because the value contains sensitive information.

Added in GitLab 9.0 Available in GitLab Runner all

var CI_REGISTRY_USER : OptionalEnvProxy

The username to use to push containers to the GitLab Container Registry, for the current project.

Added in GitLab 9.0 Available in GitLab Runner all

var CI_REPOSITORY_URL : str

The URL to clone the Git repository.

ATTENTION: Contrary to most other variables in this class, this variable is not resolved at rendering time. Instead the variable string is returned, which is then resolved during pipeline execution. This is because the value contains sensitive information.

Added in GitLab 9.0 Available in GitLab Runner all

Raises

KeyError
If environment variable not available.
var CI_RUNNER_DESCRIPTION : EnvProxy

The description of the runner as saved in GitLab.

Added in GitLab 8.10 Available in GitLab Runner 0.5

Raises

KeyError
If environment variable not available.
var CI_RUNNER_EXECUTABLE_ARCH : EnvProxy

The OS/architecture of the GitLab Runner executable (note that this is not necessarily the same as the environment of the executor).

Added in GitLab all Available in GitLab Runner 10.6

Raises

KeyError
If environment variable not available.
var CI_RUNNER_ID : EnvProxy

The unique ID of runner being used.

Added in GitLab 8.10 Available in GitLab Runner 0.5

Raises

KeyError
If environment variable not available.
var CI_RUNNER_REVISION : EnvProxy

GitLab Runner revision that is executing the current job.

Added in GitLab all Available in GitLab Runner 10.6

Raises

KeyError
If environment variable not available.
var CI_RUNNER_SHORT_TOKEN : str

First eight characters of the runner’s token used to authenticate new job requests. Used as the runner’s unique ID.

ATTENTION: Contrary to most other variables in this class, this variable is not resolved at rendering time. Instead the variable string is returned, which is then resolved during pipeline execution. This is because the value contains sensitive information.

Added in GitLab all Available in GitLab Runner 12.3

Raises

KeyError
If environment variable not available.
var CI_RUNNER_TAGS : EnvProxy

The defined runner tags.

Added in GitLab 8.10 Available in GitLab Runner 0.5

Raises

KeyError
If environment variable not available.
var CI_RUNNER_VERSION : EnvProxy

GitLab Runner version that is executing the current job.

Added in GitLab all Available in GitLab Runner 10.6

Raises

KeyError
If environment variable not available.
var CI_SERVER : EnvProxy

Mark that job is executed in CI environment.

Added in GitLab all Available in GitLab Runner all

Raises

KeyError
If environment variable not available.
var CI_SERVER_HOST : EnvProxy

Host component of the GitLab instance URL, without protocol and port (like gitlab.example.com).

Added in GitLab 12.1 Available in GitLab Runner all

Raises

KeyError
If environment variable not available.
var CI_SERVER_NAME : EnvProxy

The name of CI server that is used to coordinate jobs.

Added in GitLab all Available in GitLab Runner all

Raises

KeyError
If environment variable not available.
var CI_SERVER_PORT : EnvProxy

Port component of the GitLab instance URL, without host and protocol (like 3000).

Added in GitLab 12.8 Available in GitLab Runner all

Raises

KeyError
If environment variable not available.
var CI_SERVER_PROTOCOL : EnvProxy

Protocol component of the GitLab instance URL, without host and port (like https).

Added in GitLab 12.8 Available in GitLab Runner all

Raises

KeyError
If environment variable not available.
var CI_SERVER_REVISION : EnvProxy

GitLab revision that is used to schedule jobs.

Added in GitLab all Available in GitLab Runner all

Raises

KeyError
If environment variable not available.
var CI_SERVER_URL : EnvProxy

The base URL of the GitLab instance, including protocol and port (like https://gitlab.example.com:8080).

Added in GitLab 12.7 Available in GitLab Runner all

Raises

KeyError
If environment variable not available.
var CI_SERVER_VERSION : EnvProxy

GitLab version that is used to schedule jobs.

Added in GitLab all Available in GitLab Runner all

Raises

KeyError
If environment variable not available.
var CI_SERVER_VERSION_MAJOR : EnvProxy

GitLab version major component.

Added in GitLab 11.4 Available in GitLab Runner all

Raises

KeyError
If environment variable not available.
var CI_SERVER_VERSION_MINOR : EnvProxy

GitLab version minor component.

Added in GitLab 11.4 Available in GitLab Runner all

Raises

KeyError
If environment variable not available.
var CI_SERVER_VERSION_PATCH : EnvProxy

GitLab version patch component.

Added in GitLab 11.4 Available in GitLab Runner all

Raises

KeyError
If environment variable not available.
var CI_SHARED_ENVIRONMENT : OptionalEnvProxy

Marks that the job is executed in a shared environment (something that is persisted across CI invocations like shell or ssh executor). If the environment is shared, it is set to true, otherwise it is not defined at all.

Added in GitLab all Available in GitLab Runner 10.1

var GITLAB_CI : EnvProxy

Mark that job is executed in GitLab CI/CD environment.

Added in GitLab all Available in GitLab Runner all

Raises

KeyError
If environment variable not available.
var GITLAB_FEATURES : EnvProxy

The comma separated list of licensed features available for your instance and plan.

Added in GitLab 10.6 Available in GitLab Runner all

Raises

KeyError
If environment variable not available.
var GITLAB_USER_EMAIL : EnvProxy

The email of the user who started the job.

Added in GitLab 8.12 Available in GitLab Runner all

Raises

KeyError
If environment variable not available.
var GITLAB_USER_ID : EnvProxy

The ID of the user who started the job.

Added in GitLab 8.12 Available in GitLab Runner all

Raises

KeyError
If environment variable not available.
var GITLAB_USER_LOGIN : EnvProxy

The login username of the user who started the job.

Added in GitLab 10.0 Available in GitLab Runner all

Raises

KeyError
If environment variable not available.
var GITLAB_USER_NAME : EnvProxy

The real name of the user who started the job.

Added in GitLab 10.0 Available in GitLab Runner all

Raises

KeyError
If environment variable not available.
var TRIGGER_PAYLOAD : OptionalEnvProxy

This variable is available when a pipeline is triggered with a webhook

Added in GitLab 13.9 Available in GitLab Runner all
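
A minimal usage sketch, assuming the class documented here is exposed on the gcip root module as PredefinedVariables (the class name is an assumption, as it is not repeated in this excerpt):

from gcip import PredefinedVariables

# EnvProxy attributes raise a KeyError when they are resolved and the
# corresponding environment variable is not available:
ref_name = PredefinedVariables.CI_COMMIT_REF_NAME

# OptionalEnvProxy attributes may be unset, for example outside tag pipelines:
tag = PredefinedVariables.CI_COMMIT_TAG

# Attributes typed as plain str, such as CI_JOB_TOKEN, return the literal
# variable string and are only resolved during pipeline execution, because
# their values are sensitive:
token = PredefinedVariables.CI_JOB_TOKEN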

class Retry (*, max: int, when: Optional[List[RetryWhen]] = None, exit_codes: Optional[List[int]] = None)

This module represents the Gitlab CI Retry keyword.

Use Retry to specify a retry count to use for the Job.

Args

max : int
Maximum number of job retries. As of the Gitlab CI documentation in 2024, the number cannot be higher than 2.
when : Optional[List[RetryWhen]]
Use retry:when with retry:max to retry jobs for only specific failure cases.
exit_codes : Optional[List[int]]
Use retry:exit_codes with retry:max to retry jobs for only specific failure cases.
Expand source code
class Retry:
    """This module represents the Gitlab CI [Retry](https://docs.gitlab.com/ee/ci/yaml/#retry) keyword.

    Use `Retry` to specify a retry count to use for the `gcip.core.job.Job`.

    Args:
        max (int): Maximum number of job retries. As of the Gitlab CI documentation in 2024, the
            number cannot be higher than 2.
        when (Optional[List[RetryWhen]]): Use retry:when with retry:max to retry jobs for
            only specific failure cases.
        exit_codes (Optional[List[int]]): Use retry:exit_codes with retry:max to retry jobs for
            only specific failure cases.
    """

    def __init__(
        self,
        *,
        max: int,
        when: Optional[List[RetryWhen]] = None,
        exit_codes: Optional[List[int]] = None,
    ) -> None:
        self._validate_max(max)

        self._max = max
        self._when = when
        self._exit_codes = exit_codes

    def render(self) -> Dict[str, Union[int, Union[List[int], List[str]]]]:
        """Return a representation of this Retry object as dictionary with static values.

        The rendered representation is used by the gcip to dump it
        in YAML format as part of the .gitlab-ci.yml pipeline.

        Returns:
            Dict[str, Union[int, List[int], List[str]]]: A dictionary representing the retry object in Gitlab CI.
        """
        rendered: Dict[str, Union[int, Union[List[int], List[str]]]] = {}

        rendered["max"] = self.max

        if self._when:
            rendered["when"] = [item.value for item in self._when]

        if self._exit_codes:
            rendered["exit_codes"] = deepcopy(self._exit_codes)

        return rendered

    def _equals(self, retry: Optional[Retry]) -> bool:
        """
        Returns:
            bool: True if self equals to `retry`.
        """
        if not retry:
            return False

        return self.render() == retry.render()

    def _validate_max(self, value: int) -> None:
        assert value >= 0, "The maximum number of retries cannot be negative."
        assert (
            value <= 2
        ), "As of the Gitlab CI documentation in 2024 the maximum number of retries is 2."

    @property
    def max(self) -> int:
        return self._max

    @max.setter
    def max(self, value: int) -> None:
        self._validate_max(value)
        self._max = value

Instance variables

prop max : int
Expand source code
@property
def max(self) -> int:
    return self._max

Methods

def render(self) ‑> Dict[str, Union[int, List[int], List[str]]]

Return a representation of this Retry object as dictionary with static values.

The rendered representation is used by the gcip to dump it in YAML format as part of the .gitlab-ci.yml pipeline.

Returns

Dict[str, Union[int, List[int], List[str]]]
A dictionary representing the retry object in Gitlab CI.
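
A minimal usage sketch, assuming Retry and RetryWhen are exposed on the gcip root module; the expected output follows from the render() implementation shown above.

from gcip import Retry, RetryWhen

retry = Retry(max=2, when=[RetryWhen.runner_system_failure], exit_codes=[137])
print(retry.render())
# {'max': 2, 'when': ['runner_system_failure'], 'exit_codes': [137]}
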
class RetryWhen (*args, **kwds)

Create a collection of name/value pairs.

Example enumeration:

>>> class Color(Enum):
...     RED = 1
...     BLUE = 2
...     GREEN = 3

Access them by:

  • attribute access:
>>> Color.RED
<Color.RED: 1>
  • value lookup:
>>> Color(1)
<Color.RED: 1>
  • name lookup:
>>> Color['RED']
<Color.RED: 1>

Enumerations can be iterated over, and know how many members they have:

>>> len(Color)
3
>>> list(Color)
[<Color.RED: 1>, <Color.BLUE: 2>, <Color.GREEN: 3>]

Methods can be added to enumerations, and members can have their own attributes – see the documentation for details.

Expand source code
class RetryWhen(Enum):
    always = "always"
    unknown_failure = "unknown_failure"
    script_failure = "script_failure"
    api_failure = "api_failure"
    stuck_or_timeout_failure = "stuck_or_timeout_failure"
    runner_system_failure = "runner_system_failure"
    runner_unsupported = "runner_unsupported"
    stale_schedule = "stale_schedule"
    job_execution_timeout = "job_execution_timeout"
    archived_failure = "archived_failure"
    unmet_prerequisites = "unmet_prerequisites"
    scheduler_failure = "scheduler_failure"
    data_integrity_failure = "data_integrity_failure"

Ancestors

  • enum.Enum

Class variables

var always
var api_failure
var archived_failure
var data_integrity_failure
var job_execution_timeout
var runner_system_failure
var runner_unsupported
var scheduler_failure
var script_failure
var stale_schedule
var stuck_or_timeout_failure
var unknown_failure
var unmet_prerequisites
class Rule (*, if_statement: Optional[str] = None, when: Optional[WhenStatement] = None, allow_failure: Optional[bool] = None, changes: Optional[List[str]] = None, exists: Optional[List[str]] = None, variables: Optional[Dict[str, str]] = None)

This module represents the Gitlab CI rules keyword.

Use rules to include or exclude jobs in pipelines.

Args

if_statement : Optional[str], optional
The rules:if clause which decides when a job is added to the pipeline. Defaults to None.
when : WhenStatement, optional
The when attribute which decides when to run a job. Defaults to 'None', which means not set.
allow_failure : bool, optional
The allow_failure attribute which lets a job fail without impacting the rest of the CI suite. Defaults to 'None', which means not set.
changes : Optional[List[str]]
The changes attribute which adds a job to the pipeline by checking for changes on specific files
exists : Optional[List[str]]
The exists attribute which allows running a job when certain files exist in the repository
variables : Optional[Dict[str, str]]
The variables attribute allows defining or overwriting variables when the conditions are met
Expand source code
class Rule:
    """This module represents the Gitlab CI [rules](https://docs.gitlab.com/ee/ci/yaml/#rules) keyword.

    Use `rules` to include or exclude jobs in pipelines.

    Args:
        if_statement (Optional[str], optional): The [rules:if clause](https://docs.gitlab.com/ee/ci/yaml/#when) which decides when
            a job is added to the pipeline. Defaults to None.
        when (WhenStatement, optional): The [when](https://docs.gitlab.com/ee/ci/yaml/#when) attribute which decides when to run a job.
            Defaults to 'None', which means not set.
        allow_failure (bool, optional): The [allow_failure](https://docs.gitlab.com/ee/ci/yaml/#allow_failure) attribute which lets a
            job fail without impacting the rest of the CI suite. Defaults to 'None', which means not set.
        changes (Optional[List[str]]): The [changes](https://docs.gitlab.com/ee/ci/yaml/#ruleschanges) attribute which adds a job
            to the pipeline by checking for changes on specific files
        exists (Optional[List[str]]): The [exists](https://docs.gitlab.com/ee/ci/yaml/#rulesexists) attribute which allows running
            a job when certain files exist in the repository
        variables (Optional[Dict[str, str]]): The [variables](https://docs.gitlab.com/ee/ci/yaml/#rulesvariables) attribute allows
            defining or overwriting variables when the conditions are met
    """

    def __init__(
        self,
        *,
        if_statement: Optional[str] = None,
        when: Optional[WhenStatement] = None,
        allow_failure: Optional[bool] = None,
        changes: Optional[List[str]] = None,
        exists: Optional[List[str]] = None,
        variables: Optional[Dict[str, str]] = None,
    ) -> None:
        self._if = if_statement
        self._changes = changes
        self._when = when
        self._exists = exists
        self._allow_failure = allow_failure
        self._variables = variables if variables is not None else {}

    def never(self) -> Rule:
        """
        This method returns a copy of this rule with the `when` attribute set to `WhenStatement.NEVER`.

        This method is intended to be used for predefined rules. For instance, you may have defined an
        often used rule `on_master` whose if statement checks whether the pipeline is executed on branch
        `master`. Then you can either run a job if on master...

        ```
        my_job.append_rules(on_master)
        ```

        ... or do not run a job if on master...

        ```
        my_job.append_rules(on_master.never())
        ```

        Returns:
            Rule: A new rule object with `when` set to `WhenStatement.NEVER`.
        """
        rule_copy = copy.deepcopy(self)
        rule_copy._when = WhenStatement.NEVER
        return rule_copy

    def add_variables(self, **variables: str) -> Rule:
        """
        Adds one or more [variables](https://docs.gitlab.com/ee/ci/yaml/#variables), each as keyword argument,
        to the rule.

        Args:
            **variables (str): Each variable would be provided as keyword argument:
        ```
        rule.add_variables(GREETING="hello", LANGUAGE="python")
        ```

        Returns:
            `Rule`: The modified `Rule` object.
        """
        self._variables.update(variables)
        return self

    def _equals(self, rule: Optional[Rule]) -> bool:
        """
        Returns:
            bool: True if self equals to `rule`.
        """
        if not rule:
            return False

        return self.render() == rule.render()

    def render(self) -> Dict[str, Union[str, bool, List[str], Dict[str, str]]]:
        """Return a representation of this Rule object as dictionary with static values.

        The rendered representation is used by the gcip to dump it
        in YAML format as part of the .gitlab-ci.yml pipeline.

        Returns:
            Dict[str, Any]: A dictionary representing the rule object in Gitlab CI.
        """
        rendered_rule: Dict[str, Union[str, bool, List[str], Dict[str, str]]] = {}
        if self._if:
            rendered_rule.update({"if": self._if})

        if self._changes:
            rendered_rule["changes"] = self._changes

        if self._exists:
            rendered_rule["exists"] = self._exists

        if self._variables:
            rendered_rule["variables"] = self._variables

        if self._allow_failure is not None:
            rendered_rule["allow_failure"] = self._allow_failure

        if self._when:
            rendered_rule["when"] = self._when.value

        return rendered_rule

Methods

def add_variables(self, **variables: str) ‑> Rule

Adds one or more variables, each as keyword argument, to the rule.

Args

**variables : str
Each variable would be provided as keyword argument:
rule.add_variables(GREETING="hello", LANGUAGE="python")

Returns

Rule: The modified Rule object.

def never(self) ‑> Rule

This method returns a copy of this rule with the when attribute set to WhenStatement.NEVER.

This method is intended to be used for predefined rules. For instance, you may have defined an often used rule on_master whose if statement checks whether the pipeline is executed on branch master. Then you can either run a job if on master…

my_job.append_rules(on_master)

… or do not run a job if on master…

my_job.append_rules(on_master.never())

Returns

Rule
A new rule object with when set to WhenStatement.NEVER.
def render(self) ‑> Dict[str, Union[str, bool, List[str], Dict[str, str]]]

Return a representation of this Rule object as dictionary with static values.

The rendered representation is used by the gcip to dump it in YAML format as part of the .gitlab-ci.yml pipeline.

Returns

Dict[str, Any]
A dictionary representing the rule object in Gitlab CI.
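
A minimal usage sketch, assuming Rule is exposed on the gcip root module; the rendered dictionaries follow from the render() implementation shown above, and the never() variant additionally assumes that WhenStatement.NEVER renders as "never".

from gcip import Rule

on_main = Rule(if_statement='$CI_COMMIT_BRANCH == "main"', variables={"DEPLOY": "true"})
print(on_main.render())
# {'if': '$CI_COMMIT_BRANCH == "main"', 'variables': {'DEPLOY': 'true'}}
print(on_main.never().render())
# {'if': '$CI_COMMIT_BRANCH == "main"', 'variables': {'DEPLOY': 'true'}, 'when': 'never'}
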
class Sequence

A Sequence collects multiple Jobs and/or other Sequences into a group.
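
A minimal usage sketch, assuming Job and Sequence are exposed on the gcip root module and that the Job constructor accepts stage and script keyword arguments (the Job class is documented elsewhere in this reference):

from gcip import Job, Sequence

sequence = Sequence()
sequence.add_children(
    Job(stage="test", script="pytest"),
    Job(stage="lint", script="flake8"),
    name="python",
)
sequence.add_tags("docker")  # applied to every job within this sequence
sequence.prepend_scripts("pip install --upgrade pip")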

Expand source code
class Sequence:
    """A Sequence collects multiple `gcip.core.job.Job`s and/or other `gcip.core.sequence.Sequence`s into a group."""

    def __init__(self) -> None:
        super().__init__()
        self._children: List[ChildDict] = list()
        self._image_for_initialization: Optional[Union[Image, str]] = None
        self._image_for_replacement: Optional[Union[Image, str]] = None
        self._environment_for_initialization: Optional[Union[Environment, str]] = None
        self._environment_for_replacement: Optional[Union[Environment, str]] = None
        self._retry_for_initialization: Optional[Union[Retry, int]] = None
        self._retry_for_replacement: Optional[Union[Retry, int]] = None
        self._when_for_initialization: Optional[WhenStatement] = None
        self._when_for_replacement: Optional[WhenStatement] = None
        self._timeout_for_initialization: Optional[str] = None
        self._timeout_for_replacement: Optional[str] = None
        self._resource_group_for_initialization: Optional[str] = None
        self._resource_group_for_replacement: Optional[str] = None
        self._allow_failure_for_initialization: Optional[
            Union[bool, str, int, List[int]]
        ] = "untouched"
        self._allow_failure_for_replacement: Optional[
            Union[bool, str, int, List[int]]
        ] = "untouched"
        self._variables: Dict[str, str] = {}
        self._variables_for_initialization: Dict[str, str] = {}
        self._variables_for_replacement: Dict[str, str] = {}
        self._tags: OrderedSetType = {}
        self._tags_for_initialization: OrderedSetType = {}
        self._tags_for_replacement: OrderedSetType = {}
        self._artifacts: Optional[Artifacts] = None
        self._artifacts_for_initialization: Optional[Artifacts] = None
        self._artifacts_for_replacement: Optional[Artifacts] = None
        self._cache: Optional[Cache] = None
        self._cache_for_initialization: Optional[Cache] = None
        self._scripts_to_prepend: List[str] = []
        self._scripts_to_append: List[str] = []
        self._rules_to_append: List[Rule] = []
        self._rules_to_prepend: List[Rule] = []
        self._rules_for_initialization: List[Rule] = []
        self._rules_for_replacement: List[Rule] = []
        self._dependencies: Optional[List[Union[Job, Sequence]]] = None
        self._dependencies_for_initialization: Optional[List[Union[Job, Sequence]]] = (
            None
        )
        self._dependencies_for_replacement: Optional[List[Union[Job, Sequence]]] = None
        self._needs: Optional[List[Union[Need, Job, Sequence]]] = None
        self._needs_for_initialization: Optional[List[Union[Need, Job, Sequence]]] = (
            None
        )
        self._needs_for_replacement: Optional[List[Union[Need, Job, Sequence]]] = None
        self._parents: List[Sequence] = list()

    def _add_parent(self, parent: Sequence) -> None:
        self._parents.append(parent)

    def add_children(
        self,
        *jobs_or_sequences: Union[Job, Sequence],
        stage: Optional[str] = None,
        name: Optional[str] = None,
    ) -> Sequence:
        """Add `gcip.core.job.Job`s or other `gcip.core.sequence.Sequence`s to this sequence.

        Adding a child creates a copy of that child. You should provide a name or stage
        when adding children, to distinguish them from other places where they are used.

        Args:
            jobs_or_sequences (Union[Job, Sequence]): One or more jobs or sequences to be added to this sequence.
            stage (Optional[str], optional): Adds a stages component to all children added. Defaults to None.
            name (Optional[str], optional): Adds a name component to all children added. Defaults to None.

        Returns:
            `Sequence`: The modified `Sequence` object.
        """
        for child in jobs_or_sequences:
            child._add_parent(self)
            self._children.append({"child": child, "stage": stage, "name": name})
        return self

    def add_variables(self, **variables: str) -> Sequence:
        """Calling `gcip.core.job.Job.add_variables()` to all jobs within this sequence.

        Returns:
            `Sequence`: The modified `Sequence` object.
        """
        self._variables.update(variables)
        return self

    def initialize_variables(self, **variables: str) -> Sequence:
        """Calling `gcip.core.job.Job.add_variables()` to all jobs within this sequence that haven't been added variables before.

        Returns:
            `Sequence`: The modified `Sequence` object.
        """
        self._variables_for_initialization.update(variables)
        return self

    def override_variables(self, **variables: str) -> Sequence:
        """Calling `gcip.core.job.Job.add_variables()` to all jobs within this sequence and overriding any previously added variables to that jobs.

        Returns:
            `Sequence`: The modified `Sequence` object.
        """
        self._variables_for_replacement.update(variables)
        return self

    def set_cache(self, cache: Cache) -> Sequence:
        """Calling `gcip.core.job.Job.set_cache()` to all jobs within this sequence.

        Returns:
            `Sequence`: The modified `Sequence` object.
        """
        self._cache = cache
        return self

    def initialize_cache(self, cache: Cache) -> Sequence:
        """Calling `gcip.core.job.Job.set_cache()` to all jobs within this sequence that haven't been set the cache before.

        Returns:
            `Sequence`: The modified `Sequence` object.
        """
        self._cache_for_initialization = cache
        return self

    def set_artifacts(self, artifacts: Artifacts) -> Sequence:
        """Sets `gcip.core.job.Job.artifacts` to all jobs within this sequence.

        Returns:
            `Sequence`: The modified `Sequence` object.
        """
        self._artifacts = artifacts
        return self

    def initialize_artifacts(self, artifacts: Artifacts) -> Sequence:
        """Sets `gcip.core.job.Job.artifacts` to all jobs within this sequence that haven't been set the artifacs before.

        Returns:
            `Sequence`: The modified `Sequence` object.
        """
        self._artifacts_for_initialization = artifacts
        return self

    def override_artifacts(self, artifacts: Artifacts) -> Sequence:
        """Calling `gcip.core.job.Job.set_artifacts()` to all jobs within this sequence and overriding any previously added artifacts to that jobs.

        Returns:
            `Sequence`: The modified `Sequence` object.
        """
        self._artifacts_for_replacement = artifacts
        return self

    def add_tags(self, *tags: str) -> Sequence:
        """Calling `gcip.core.job.Job.add_tags()` to all jobs within this sequence.

        Returns:
            `Sequence`: The modified `Sequence` object.
        """
        for tag in tags:
            self._tags[tag] = None
        return self

    def initialize_tags(self, *tags: str) -> Sequence:
        """Calling `gcip.core.job.Job.add_tags()` to all jobs within this sequence that haven't been added tags before.

        Returns:
            `Sequence`: The modified `Sequence` object.
        """
        for tag in tags:
            self._tags_for_initialization[tag] = None
        return self

    def override_tags(self, *tags: str) -> Sequence:
        """Calling `gcip.core.job.Job.add_tags()` to all jobs within this sequence and overriding any previously added tags to that jobs.

        Returns:
            `Sequence`: The modified `Sequence` object.
        """
        for tag in tags:
            self._tags_for_replacement[tag] = None
        return self

    def append_rules(self, *rules: Rule) -> Sequence:
        """Calling `gcip.core.job.Job.append_rules()` to all jobs within this sequence.

        Returns:
            `Sequence`: The modified `Sequence` object.
        """
        self._rules_to_append.extend(rules)
        return self

    def prepend_rules(self, *rules: Rule) -> Sequence:
        """Calling `gcip.core.job.Job.prepend_rules()` to all jobs within this sequence.

        Returns:
            `Sequence`: The modified `Sequence` object.
        """
        self._rules_to_prepend = list(rules) + self._rules_to_prepend
        return self

    def initialize_rules(self, *rules: Rule) -> Sequence:
        """Calling `gcip.core.job.Job.append_rules()` to all jobs within this sequence that haven't been added rules before.

        Returns:
            `Sequence`: The modified `Sequence` object.
        """
        self._rules_for_initialization.extend(rules)
        return self

    def override_rules(self, *rules: Rule) -> Sequence:
        """Calling `gcip.core.job.Job.override_rules()` to all jobs within this sequence and overriding any previously added rules to that jobs.

        Returns:
            `Sequence`: The modified `Sequence` object.
        """
        self._rules_for_replacement.extend(rules)
        return self

    def add_dependencies(self, *dependencies: Union[Job, Sequence]) -> Sequence:
        """Calling `gcip.core.job.Job.add_dependencies()` to all jobs within the first stage of this sequence.

        Returns:
            `Sequence`: The modified `Sequence` object.
        """
        if self._dependencies is None:
            self._dependencies = []
        self._dependencies.extend(dependencies)
        return self

    def initialize_dependencies(self, *dependencies: Union[Job, Sequence]) -> Sequence:
        """Calling `gcip.core.job.Job.set_dependencies()` to all jobs within the first stage of this sequence that haven't been added dependencies before.
        An empty parameter list means that jobs will get an empty dependency list and thus does not download artifacts by default.

        Returns:
            `Sequence`: The modified `Sequence` object.
        """
        self._dependencies_for_initialization = list(dependencies)
        return self

    def override_dependencies(self, *dependencies: Union[Job, Sequence]) -> Sequence:
        """
        Calling `gcip.core.job.Job.set_dependencies()` to all jobs within the first stage of this sequence and overriding any previously added
        dependencies to that jobs.
        An empty parameter list means that jobs will get an empty dependency list and thus does not download artifacts.

        Returns:
            `Sequence`: The modified `Sequence` object.
        """
        self._dependencies_for_replacement = list(dependencies)
        return self

    def add_needs(self, *needs: Union[Need, Job, Sequence]) -> Sequence:
        """Calling `gcip.core.job.Job.add_need()` to all jobs within the first stage of this sequence.

        Returns:
            `Sequence`: The modified `Sequence` object.
        """
        if self._needs is None:
            self._needs = []
        self._needs.extend(needs)
        return self

    def initialize_needs(self, *needs: Union[Need, Job, Sequence]) -> Sequence:
        """Calling `gcip.core.job.Job.set_needs()` to all jobs within the first stage of this sequence that haven't been added needs before.
        An empty parameter list means that jobs will get an empty dependency list and thus does not depend on other jobs by default.

        Returns:
            `Sequence`: The modified `Sequence` object.
        """
        self._needs_for_initialization = list(needs)
        return self

    def override_needs(self, *needs: Union[Need, Job, Sequence]) -> Sequence:
        """Calling `gcip.core.job.Job.set_needs()` to all jobs within the first stage of this sequence and overriding any previously added needs to that jobs.
        An empty parameter list means that jobs will get an empty dependency list and thus does not depend on other jobs.

        Returns:
            `Sequence`: The modified `Sequence` object.
        """
        self._needs_for_replacement = list(needs)
        return self

    def prepend_scripts(self, *scripts: str) -> Sequence:
        """Calling `gcip.core.job.Job.prepend_scripts()` to all jobs within this sequence.

        Returns:
            `Sequence`: The modified `Sequence` object.
        """
        self._scripts_to_prepend = list(scripts) + self._scripts_to_prepend
        return self

    def append_scripts(self, *scripts: str) -> Sequence:
        """Calling `gcip.core.job.Job.append_scripts()` to all jobs within this sequence.

        Returns:
            `Sequence`: The modified `Sequence` object.
        """
        self._scripts_to_append.extend(scripts)
        return self

    def initialize_image(self, image: Union[Image, str]) -> Sequence:
        """Calling `gcip.core.job.Job.set_image()` to all jobs within this sequence.

        Returns:
            `Sequence`: The modified `Sequence` object.
        """
        if image:
            self._image_for_initialization = image
        return self

    def override_image(self, image: Union[Image, str]) -> Sequence:
        """Calling `gcip.core.job.Job.set_image()` to all jobs within this sequence overriding any previous set value.

        Returns:
            `Sequence`: The modified `Sequence` object.
        """
        if image:
            self._image_for_replacement = image
        return self

    def initialize_environment(
        self, environment: Optional[Union[Environment, str]]
    ) -> Sequence:
        """Calling `gcip.core.job.Job.set_environment()` to all jobs within this sequence.

        Returns:
            `Sequence`: The modified `Sequence` object.
        """
        if environment:
            self._environment_for_initialization = environment
        return self

    def override_environment(
        self, environment: Optional[Union[Environment, str]]
    ) -> Sequence:
        """Calling `gcip.core.job.Job.set_environment()` to all jobs within this sequence overriding any previous set value.

        Returns:
            `Sequence`: The modified `Sequence` object.
        """
        if environment:
            self._environment_for_replacement = environment
        return self

    def initialize_retry(self, retry: Optional[Union[Retry, int]]) -> Sequence:
        """Calling `gcip.core.job.Job.set_retry()` to all jobs within this sequence.

        Returns:
            `Sequence`: The modified `Sequence` object.
        """
        if retry:
            self._retry_for_initialization = retry
        return self

    def override_retry(self, retry: Optional[Union[Retry, int]]) -> Sequence:
        """Calling `gcip.core.job.Job.set_retry()` to all jobs within this sequence overriding any previous set value.

        Returns:
            `Sequence`: The modified `Sequence` object.
        """
        if retry:
            self._retry_for_replacement = retry
        return self

    def initialize_when(self, when: Optional[WhenStatement]) -> Sequence:
        """Calling `gcip.core.job.Job.set_when()` to all jobs within this sequence.

        Returns:
            `Sequence`: The modified `Sequence` object.
        """
        if when:
            self._when_for_initialization = when
        return self

    def override_when(self, when: Optional[WhenStatement]) -> Sequence:
        """Calling `gcip.core.job.Job.set_when()` to all jobs within this sequence overriding any previous set value.

        Returns:
            `Sequence`: The modified `Sequence` object.
        """
        if when:
            self._when_for_replacement = when
        return self

    def initialize_timeout(self, timeout: Optional[str]) -> Sequence:
        """Calling `gcip.core.job.Job.set_timeout()` to all jobs within this sequence.

        Returns:
            `Sequence`: The modified `Sequence` object.
        """
        if timeout:
            self._timeout_for_initialization = timeout
        return self

    def override_timeout(self, timeout: Optional[str]) -> Sequence:
        """Calling `gcip.core.job.Job.set_timeout()` to all jobs within this sequence overriding any previous set value.

        Returns:
            `Sequence`: The modified `Sequence` object.
        """
        if timeout:
            self._timeout_for_replacement = timeout
        return self

    def initialize_resource_group(self, resource_group: Optional[str]) -> Sequence:
        """Calling `gcip.core.job.Job.set_resource_group()` to all jobs within this sequence.

        Returns:
            `Sequence`: The modified `Sequence` object.
        """
        if resource_group:
            self._resource_group_for_initialization = resource_group
        return self

    def override_resource_group(self, resource_group: Optional[str]) -> Sequence:
        """Calling `gcip.core.job.Job.set_resource_group()` to all jobs within this sequence overriding any previous set value.

        Returns:
            `Sequence`: The modified `Sequence` object.
        """
        if resource_group:
            self._resource_group_for_replacement = resource_group
        return self

    def initialize_allow_failure(
        self, allow_failure: Optional[Union[bool, str, int, List[int]]]
    ) -> Sequence:
        """Calling `gcip.core.job.Job.set_allow_failure()` to all jobs within this sequence that haven't been set the allow_failure before.

        Returns:
            `Sequence`: The modified `Sequence` object.
        """
        self._allow_failure_for_initialization = allow_failure
        return self

    def override_allow_failure(
        self, allow_failure: Optional[Union[bool, str, int, List[int]]]
    ) -> Sequence:
        """Calling `gcip.core.job.Job.set_allow_failure()` to all jobs within this sequence overriding any previous set value.

        Returns:
            `Sequence`: The modified `Sequence` object.
        """
        self._allow_failure_for_replacement = allow_failure
        return self

    def _get_all_instance_names(self, child: Union[Job, Sequence]) -> Set[str]:
        """Return all instance names from the given child.

        That means all combinations of the child's name and stage within this
        sequence and all parent sequences.
        """

        # first get all instance names from parents of this sequence
        own_instance_names: Set[str] = set()
        for parent in self._parents:
            own_instance_names.update(parent._get_all_instance_names(self))

        # second get all instance names of the child within this sequence
        child_instance_names: Set[str] = set()
        child_instance_name: str
        for item in self._children:
            if item["child"] is child:
                child_name = item["name"]
                child_stage = item["stage"]
                if child_stage:
                    if child_name:
                        child_instance_name = f"{child_name}-{child_stage}"
                    else:
                        child_instance_name = child_stage
                elif child_name:
                    child_instance_name = child_name
                else:
                    child_instance_name = ""

                # all job names have '-' instead of '_'
                child_instance_names.add(child_instance_name.replace("_", "-"))

        # third combine all instance names of this sequence
        # with all instance names of the child
        return_values: Set[str] = set()
        if own_instance_names:
            for child_instance_name in child_instance_names:
                for instance_name in own_instance_names:
                    if child_instance_name and instance_name:
                        return_values.add(f"{instance_name}-{child_instance_name}")
                    elif child_instance_name:
                        return_values.add(child_instance_name)
                    else:
                        return_values.add(instance_name)
        else:
            return_values = child_instance_names

        return return_values

    @property
    def last_jobs_executed(self) -> List[Job]:
        """This property returns all Jobs from the last stage of this sequence.

        This is typically requested by a job which has set up this sequence as a need,
        to determine all the actual jobs of this sequence to depend on.
        """
        all_jobs = self.populated_jobs
        stages: Dict[str, None] = {}
        for job in all_jobs:
            # use the keys of dictionary as ordered set
            stages[job.stage] = None

        last_stage = list(stages.keys())[-1]
        last_executed_jobs: List[Job] = list()
        for job in all_jobs:
            if job._stage == last_stage:
                if job._original:
                    last_executed_jobs.append(job._original)
                else:
                    raise AttributeError(
                        "job._original is None, because the job is not a copy of another job"
                    )

        return last_executed_jobs

    def find_jobs(
        self, *job_filters: JobFilter, include_sequence_attributes: bool = False
    ) -> Set[Job]:
        """
        Recursively find all jobs matching one or more criteria.

        This sequence searches all of its own jobs, and recursively all jobs of its
        sub-sequences, for jobs matching the `job_filters`. A job must match all
        criteria of a job_filter, but only needs to match at least one job_filter, to
        be included in the set of jobs returned. In other words: a job is returned if
        it matches all criteria of at least one job_filter.

        Args:
            *job_filters (JobFilter): One or more filters to select the jobs returned.
            include_sequence_attributes (bool): **IMPORTANT!** This flag affects the result.
                When set to `True`, attributes inherited from the parent sequences a job
                resides in are also considered when matching jobs against the `job_filters`.
                On the one hand this makes the search more natural, as you are looking for
                jobs as they appear in the final yaml output. On the other hand it might be
                confusing that the jobs returned from the search do not themselves contain
                the attributes you searched for, because those attributes are inherited from
                parent sequences and not contained in the job itself.
                **ATTENTION:** Imagine two sequences contain the identical (not just equal!)
                job object. In the resulting yaml pipeline this job is contained twice, with
                the different attributes it inherits from each of its sequences. You might
                find this job by the attributes of only one of its sequences, yet any change
                you make is applied to the identical job object of both sequences. So while
                you only intended to modify one of the resulting jobs in the final yaml
                pipeline, you in fact change the attributes of both resulting jobs, because
                you set the attribute on the job and not on the sequence.
                If you only want to search jobs by attributes the jobs really have themselves,
                set this flag to `False`. In this case the result may also be confusing,
                because you might miss jobs that clearly have the attributes you are looking
                for in the final yaml pipeline, namely when those jobs only inherit those
                attributes from their parent sequences.
                Because you can accidentally modify two resulting jobs in the final yaml
                pipeline by editing the identical job object contained in different sequences,
                the default value of `include_sequence_attributes` is `False`. When you set
                it to `True` you have to keep this behavior in mind.

        Returns:
            Set[Job]: The set of all jobs that match all criteria of at least
                one job filter.
        """
        jobs: Set[Job] = set()

        if include_sequence_attributes:
            for job in self.populated_jobs:
                for filter in job_filters:
                    if filter.match(job):
                        if job._original:
                            jobs.add(job._original)
                        else:
                            raise AttributeError(
                                "job._original is None, because the job is not a copy of another job"
                            )
        else:
            for item in self._children:
                child = item["child"]
                if isinstance(child, Job):
                    for filter in job_filters:
                        if filter.match(child):
                            jobs.add(child)
                elif isinstance(child, Sequence):
                    jobs.update(
                        child.find_jobs(
                            *job_filters,
                            include_sequence_attributes=include_sequence_attributes,
                        )
                    )
                else:
                    raise TypeError(
                        f"child in self._children is of wront type: {type(child)}"
                    )
        return jobs

    @property
    def nested_jobs(self) -> List[Job]:
        """Returns all jobs of this this sequences as well as jobs of sub-sequences recursively."""
        all_jobs: List[Job] = []
        for item in self._children:
            child = item["child"]
            if isinstance(child, Job):
                all_jobs.append(child)
            elif isinstance(child, Sequence):
                all_jobs.extend(child.nested_jobs)
            else:
                raise ValueError(
                    f"Unexpected error. Sequence child is of unknown type '{type(child)}'."
                )
        return all_jobs

    @property
    def populated_jobs(self) -> List[Job]:
        """Returns a list with populated copies of all nested jobs of this sequence.

        Populated means that all attributes of a Job which depend on its context are resolved
        to their final values. The context is primarily the sequence within which the job resides,
        but also dependencies on other jobs and sequences. Thus this sequence will apply its own
        configuration, like variables to add, tags to set, etc., to all its jobs and sequences.

        Copies means just that: the returned jobs are not the same job objects originally
        added to this sequence, but copies of them.

        Nested means that jobs from sequences within this sequence are also returned, as well
        as jobs from sequences nested within those sequences, and so on.

        Returns:
            List[Job]: A list of copies of all nested jobs of this sequence with their final attribute values.
        """
        all_jobs: List[Job] = []
        for item in self._children:
            child = item["child"]
            child_name = item["name"]
            child_stage = item["stage"]
            if isinstance(child, Sequence):
                for job_copy in child.populated_jobs:
                    job_copy._extend_stage(child_stage)
                    job_copy._extend_name(child_name)
                    all_jobs.append(job_copy)
            elif isinstance(child, Job):
                job_copy = child._copy()
                job_copy._extend_stage(child_stage)
                job_copy._extend_name(child_name)
                all_jobs.append(job_copy)

        if all_jobs:
            first_job = all_jobs[0]
            if self._needs_for_initialization is not None and first_job._needs is None:
                first_job.set_needs(copy.deepcopy(self._needs_for_initialization))
            if self._needs_for_replacement is not None:
                first_job.set_needs(copy.deepcopy(self._needs_for_replacement))
            if self._needs is not None:
                first_job.add_needs(*copy.deepcopy(self._needs))
            for job in all_jobs[1:]:
                if job._stage == first_job.stage:
                    if (
                        self._needs_for_initialization is not None
                        and job._needs is None
                    ):
                        job.set_needs(copy.deepcopy(self._needs_for_initialization))
                    if self._needs_for_replacement is not None:
                        job.set_needs(copy.deepcopy(self._needs_for_replacement))
                    if self._needs is not None:
                        job.add_needs(*copy.deepcopy(self._needs))

        for job in all_jobs:
            if self._image_for_initialization and not job._image:
                job.set_image(copy.deepcopy(self._image_for_initialization))
            if self._image_for_replacement:
                job.set_image(copy.deepcopy(self._image_for_replacement))

            if self._environment_for_initialization and not job._environment:
                job.set_environment(copy.deepcopy(self._environment_for_initialization))
            if self._environment_for_replacement:
                job.set_environment(copy.deepcopy(self._environment_for_replacement))

            if self._retry_for_initialization and not job._retry:
                job.set_retry(copy.deepcopy(self._retry_for_initialization))
            if self._retry_for_replacement:
                job.set_retry(copy.deepcopy(self._retry_for_replacement))

            if self._when_for_initialization and not job._when:
                job.set_when(copy.deepcopy(self._when_for_initialization))
            if self._when_for_replacement:
                job.set_when(copy.deepcopy(self._when_for_replacement))

            if self._timeout_for_initialization and not job._timeout:
                job.set_timeout(self._timeout_for_initialization)
            if self._timeout_for_replacement:
                job.set_timeout(self._timeout_for_replacement)

            if self._resource_group_for_initialization and not job._resource_group:
                job.set_resource_group(self._resource_group_for_initialization)
            if self._resource_group_for_replacement:
                job.set_resource_group(self._resource_group_for_replacement)

            if (
                self._allow_failure_for_initialization != "untouched"
                and job._allow_failure == "untouched"
            ):
                job._allow_failure = self._allow_failure_for_initialization
            if self._allow_failure_for_replacement != "untouched":
                job._allow_failure = self._allow_failure_for_replacement

            if self._variables_for_initialization and not job._variables:
                job._variables = copy.deepcopy(self._variables_for_initialization)
            if self._variables_for_replacement:
                job._variables = copy.deepcopy(self._variables_for_replacement)
            job.add_variables(**copy.deepcopy(self._variables))

            if self._cache_for_initialization and not job._cache:
                job._cache = copy.deepcopy(self._cache_for_initialization)
            job.set_cache(copy.deepcopy(self._cache))

            if self._artifacts_for_initialization and (
                not job.artifacts.paths and not job.artifacts.reports
            ):
                job._artifacts = copy.deepcopy(self._artifacts_for_initialization)
            if self._artifacts_for_replacement:
                job._artifacts = copy.deepcopy(self._artifacts_for_replacement)
            job.set_artifacts(copy.deepcopy(self._artifacts))

            if (
                self._dependencies_for_initialization is not None
                and job._dependencies is None
            ):
                job.set_dependencies(
                    copy.deepcopy(self._dependencies_for_initialization)
                )
            if self._dependencies_for_replacement is not None:
                job.set_dependencies(copy.deepcopy(self._dependencies_for_replacement))
            if self._dependencies is not None:
                job.add_dependencies(*copy.deepcopy(self._dependencies))

            if self._tags_for_initialization and not job._tags:
                job._tags = copy.deepcopy(self._tags_for_initialization)
            if self._tags_for_replacement:
                job._tags = copy.deepcopy(self._tags_for_replacement)
            job.add_tags(*list(copy.deepcopy(self._tags).keys()))

            if self._rules_for_initialization and not job._rules:
                job._rules = copy.deepcopy(self._rules_for_initialization)
            if self._rules_for_replacement:
                job._rules = copy.deepcopy(self._rules_for_replacement)
            job.append_rules(*copy.deepcopy(self._rules_to_append))
            job.prepend_rules(*copy.deepcopy(self._rules_to_prepend))

            job.prepend_scripts(*copy.deepcopy(self._scripts_to_prepend))
            job.append_scripts(*copy.deepcopy(self._scripts_to_append))

        return all_jobs

Subclasses

Instance variables

prop last_jobs_executed : List[Job]

This property returns all Jobs from the last stage of this sequence.

This is typically requested by a job which has set up this sequence as a need, to determine all the actual jobs of this sequence to depend on.
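
For illustration, a minimal sketch (stages, scripts and names are invented for this example) of wiring a downstream job to only the last stage of a sequence:

from gcip import Job, Sequence

build_sequence = Sequence()
build_sequence.add_children(
    Job(stage="compile", script="make compile"),
    Job(stage="package", script="make package"),
)

deploy_job = Job(stage="deploy", script="make deploy")
# only the jobs of the last stage ('package') become needs of the deploy job
deploy_job.add_needs(*build_sequence.last_jobs_executed)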

Expand source code
@property
def last_jobs_executed(self) -> List[Job]:
    """This property returns all Jobs from the last stage of this sequence.

    This is typically requested by a job which has set up this sequence as a need,
    to determine all the actual jobs of this sequence to depend on.
    """
    all_jobs = self.populated_jobs
    stages: Dict[str, None] = {}
    for job in all_jobs:
        # use the keys of dictionary as ordered set
        stages[job.stage] = None

    last_stage = list(stages.keys())[-1]
    last_executed_jobs: List[Job] = list()
    for job in all_jobs:
        if job._stage == last_stage:
            if job._original:
                last_executed_jobs.append(job._original)
            else:
                raise AttributeError(
                    "job._original is None, because the job is not a copy of another job"
                )

    return last_executed_jobs
prop nested_jobs : List[Job]

Returns all jobs of this sequence as well as jobs of its sub-sequences, recursively.

Expand source code
@property
def nested_jobs(self) -> List[Job]:
    """Returns all jobs of this this sequences as well as jobs of sub-sequences recursively."""
    all_jobs: List[Job] = []
    for item in self._children:
        child = item["child"]
        if isinstance(child, Job):
            all_jobs.append(child)
        elif isinstance(child, Sequence):
            all_jobs.extend(child.nested_jobs)
        else:
            raise ValueError(
                f"Unexpected error. Sequence child is of unknown type '{type(child)}'."
            )
    return all_jobs
prop populated_jobs : List[Job]

Returns a list with populated copies of all nested jobs of this sequence.

Populated means that all attributes of a Job which depend on its context are resolved to their final values. The context is primarily the sequence within which the job resides, but also dependencies on other jobs and sequences. Thus this sequence will apply its own configuration, like variables to add, tags to set, etc., to all its jobs and sequences.

Copies means just that: the returned jobs are not the same job objects originally added to this sequence, but copies of them.

Nested means that jobs from sequences within this sequence are also returned, as well as jobs from sequences nested within those sequences, and so on.

Returns

List[Job]
A list of copies of all nested jobs of this sequence with their final attribute values.
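
As a rough sketch (stage, script and tag are invented), the sequence configuration only shows up in the populated copies, never in the job objects that were added:

from gcip import Job, Sequence

sequence = Sequence()
sequence.add_children(Job(stage="test", script="pytest"), name="python")
sequence.add_tags("docker")

# The copies returned here carry the extended name/stage and the 'docker' tag,
# while the Job object added above stays untouched.
for job_copy in sequence.populated_jobs:
    print(job_copy.stage)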
Expand source code
@property
def populated_jobs(self) -> List[Job]:
    """Returns a list with populated copies of all nested jobs of this sequence.

    Populated means that all attributes of a Job which depend on its context are resolved
    to their final values. The context is primarily the sequence within which the job resides,
    but also dependencies on other jobs and sequences. Thus this sequence will apply its own
    configuration, like variables to add, tags to set, etc., to all its jobs and sequences.

    Copies means just that: the returned jobs are not the same job objects originally
    added to this sequence, but copies of them.

    Nested means that jobs from sequences within this sequence are also returned, as well
    as jobs from sequences nested within those sequences, and so on.

    Returns:
        List[Job]: A list of copies of all nested jobs of this sequence with their final attribute values.
    """
    all_jobs: List[Job] = []
    for item in self._children:
        child = item["child"]
        child_name = item["name"]
        child_stage = item["stage"]
        if isinstance(child, Sequence):
            for job_copy in child.populated_jobs:
                job_copy._extend_stage(child_stage)
                job_copy._extend_name(child_name)
                all_jobs.append(job_copy)
        elif isinstance(child, Job):
            job_copy = child._copy()
            job_copy._extend_stage(child_stage)
            job_copy._extend_name(child_name)
            all_jobs.append(job_copy)

    if all_jobs:
        first_job = all_jobs[0]
        if self._needs_for_initialization is not None and first_job._needs is None:
            first_job.set_needs(copy.deepcopy(self._needs_for_initialization))
        if self._needs_for_replacement is not None:
            first_job.set_needs(copy.deepcopy(self._needs_for_replacement))
        if self._needs is not None:
            first_job.add_needs(*copy.deepcopy(self._needs))
        for job in all_jobs[1:]:
            if job._stage == first_job.stage:
                if (
                    self._needs_for_initialization is not None
                    and job._needs is None
                ):
                    job.set_needs(copy.deepcopy(self._needs_for_initialization))
                if self._needs_for_replacement is not None:
                    job.set_needs(copy.deepcopy(self._needs_for_replacement))
                if self._needs is not None:
                    job.add_needs(*copy.deepcopy(self._needs))

    for job in all_jobs:
        if self._image_for_initialization and not job._image:
            job.set_image(copy.deepcopy(self._image_for_initialization))
        if self._image_for_replacement:
            job.set_image(copy.deepcopy(self._image_for_replacement))

        if self._environment_for_initialization and not job._environment:
            job.set_environment(copy.deepcopy(self._environment_for_initialization))
        if self._environment_for_replacement:
            job.set_environment(copy.deepcopy(self._environment_for_replacement))

        if self._retry_for_initialization and not job._retry:
            job.set_retry(copy.deepcopy(self._retry_for_initialization))
        if self._retry_for_replacement:
            job.set_retry(copy.deepcopy(self._retry_for_replacement))

        if self._when_for_initialization and not job._when:
            job.set_when(copy.deepcopy(self._when_for_initialization))
        if self._when_for_replacement:
            job.set_when(copy.deepcopy(self._when_for_replacement))

        if self._timeout_for_initialization and not job._timeout:
            job.set_timeout(self._timeout_for_initialization)
        if self._timeout_for_replacement:
            job.set_timeout(self._timeout_for_replacement)

        if self._resource_group_for_initialization and not job._resource_group:
            job.set_resource_group(self._resource_group_for_initialization)
        if self._resource_group_for_replacement:
            job.set_resource_group(self._resource_group_for_replacement)

        if (
            self._allow_failure_for_initialization != "untouched"
            and job._allow_failure == "untouched"
        ):
            job._allow_failure = self._allow_failure_for_initialization
        if self._allow_failure_for_replacement != "untouched":
            job._allow_failure = self._allow_failure_for_replacement

        if self._variables_for_initialization and not job._variables:
            job._variables = copy.deepcopy(self._variables_for_initialization)
        if self._variables_for_replacement:
            job._variables = copy.deepcopy(self._variables_for_replacement)
        job.add_variables(**copy.deepcopy(self._variables))

        if self._cache_for_initialization and not job._cache:
            job._cache = copy.deepcopy(self._cache_for_initialization)
        job.set_cache(copy.deepcopy(self._cache))

        if self._artifacts_for_initialization and (
            not job.artifacts.paths and not job.artifacts.reports
        ):
            job._artifacts = copy.deepcopy(self._artifacts_for_initialization)
        if self._artifacts_for_replacement:
            job._artifacts = copy.deepcopy(self._artifacts_for_replacement)
        job.set_artifacts(copy.deepcopy(self._artifacts))

        if (
            self._dependencies_for_initialization is not None
            and job._dependencies is None
        ):
            job.set_dependencies(
                copy.deepcopy(self._dependencies_for_initialization)
            )
        if self._dependencies_for_replacement is not None:
            job.set_dependencies(copy.deepcopy(self._dependencies_for_replacement))
        if self._dependencies is not None:
            job.add_dependencies(*copy.deepcopy(self._dependencies))

        if self._tags_for_initialization and not job._tags:
            job._tags = copy.deepcopy(self._tags_for_initialization)
        if self._tags_for_replacement:
            job._tags = copy.deepcopy(self._tags_for_replacement)
        job.add_tags(*list(copy.deepcopy(self._tags).keys()))

        if self._rules_for_initialization and not job._rules:
            job._rules = copy.deepcopy(self._rules_for_initialization)
        if self._rules_for_replacement:
            job._rules = copy.deepcopy(self._rules_for_replacement)
        job.append_rules(*copy.deepcopy(self._rules_to_append))
        job.prepend_rules(*copy.deepcopy(self._rules_to_prepend))

        job.prepend_scripts(*copy.deepcopy(self._scripts_to_prepend))
        job.append_scripts(*copy.deepcopy(self._scripts_to_append))

    return all_jobs

Methods

def add_children(self, *jobs_or_sequences: Union[Job, Sequence], stage: Optional[str] = None, name: Optional[str] = None) ‑> Sequence

Add Jobs or other Sequences to this sequence.

Adding a child creates a copy of that child. You should provide a name or stage when adding children, to make them different from other places where they will be used.

Args

jobs_or_sequences : Union[Job, Sequence]
One or more jobs or sequences to be added to this sequence.
stage : Optional[str], optional
Adds a stage component to all children added. Defaults to None.
name : Optional[str], optional
Adds a name component to all children added. Defaults to None.

Returns

Sequence: The modified Sequence object.
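
A small usage sketch (job and sequence names are invented) showing how the name component keeps reused children distinguishable:

from gcip import Job, Sequence

test_sequence = Sequence()
test_sequence.add_children(
    Job(name="unit", stage="test", script="make test-unit"),
    Job(name="integration", stage="test", script="make test-integration"),
)

release_sequence = Sequence()
# the same child sequence is reused twice, distinguished by the name component
release_sequence.add_children(test_sequence, name="frontend")
release_sequence.add_children(test_sequence, name="backend")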

def add_dependencies(self, *dependencies: Union[Job, Sequence]) ‑> Sequence

Calls Job.add_dependencies() on all jobs within the first stage of this sequence.

Returns

Sequence: The modified Sequence object.

def add_needs(self, *needs: Union[Need, Job, Sequence]) ‑> Sequence

Calls gcip.core.job.Job.add_needs() on all jobs within the first stage of this sequence.

Returns

Sequence: The modified Sequence object.

def add_tags(self, *tags: str) ‑> Sequence

Calls Job.add_tags() on all jobs within this sequence.

Returns

Sequence: The modified Sequence object.

def add_variables(self, **variables: str) ‑> Sequence

Calls Job.add_variables() on all jobs within this sequence.

Returns

Sequence: The modified Sequence object.

def append_rules(self, *rules: Rule) ‑> Sequence

Calls Job.append_rules() on all jobs within this sequence.

Returns

Sequence: The modified Sequence object.

def append_scripts(self, *scripts: str) ‑> Sequence

Calls Job.append_scripts() on all jobs within this sequence.

Returns

Sequence: The modified Sequence object.
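
A minimal sketch combining this method with prepend_scripts() (see further below); the echo commands are placeholders:

from gcip import Job, Sequence

sequence = Sequence()
sequence.add_children(Job(stage="build", script="make build"))

sequence.prepend_scripts("echo 'before every job script'")
sequence.append_scripts("echo 'after every job script'")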

def find_jobs(self, *job_filters: JobFilter, include_sequence_attributes: bool = False) ‑> Set[Job]

Recursively find all jobs matching one or more criteria.

This sequence searches all of its own jobs, and recursively all jobs of its sub-sequences, for jobs matching the job_filters. A job must match all criteria of a job_filter, but only needs to match at least one job_filter, to be included in the set of jobs returned. In other words: a job is returned if it matches all criteria of at least one job_filter.

Args

*job_filters : JobFilter
One or more filters to select the jobs returned.
include_sequence_attributes : bool
IMPORTANT! This flag affects the result. When set to True, attributes inherited from the parent sequences a job resides in are also considered when matching jobs against the job_filters. On the one hand this makes the search more natural, as you are looking for jobs as they appear in the final yaml output. On the other hand it might be confusing that the jobs returned from the search do not themselves contain the attributes you searched for, because those attributes are inherited from parent sequences and not contained in the job itself. ATTENTION: Imagine two sequences contain the identical (not just equal!) job object. In the resulting yaml pipeline this job is contained twice, with the different attributes it inherits from each of its sequences. You might find this job by the attributes of only one of its sequences, yet any change you make is applied to the identical job object of both sequences. So while you only intended to modify one of the resulting jobs in the final yaml pipeline, you in fact change the attributes of both resulting jobs, because you set the attribute on the job and not on the sequence. If you only want to search jobs by attributes the jobs really have themselves, set this flag to False. In this case the result may also be confusing, because you might miss jobs that clearly have the attributes you are looking for in the final yaml pipeline, namely when those jobs only inherit those attributes from their parent sequences. Because you can accidentally modify two resulting jobs in the final yaml pipeline by editing the identical job object contained in different sequences, the default value of include_sequence_attributes is False. When you set it to True you have to keep this behavior in mind.

Returns

Set[Job]
The set of all jobs that match all criteria of at least one job filter.
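
A sketch of the intended call pattern. The JobFilter keyword criteria shown here (stage=...) and its import path are assumptions for this example; consult the JobFilter documentation for the exact filter parameters:

from gcip import JobFilter  # import path assumed for this sketch

# find all jobs of the 'test' stage in this sequence and its sub-sequences
test_jobs = sequence.find_jobs(JobFilter(stage="test"))
for job in test_jobs:
    job.add_tags("slow-runner")
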
def initialize_allow_failure(self, allow_failure: Optional[Union[bool, str, int, List[int]]]) ‑> Sequence

Calls Job.set_allow_failure() on all jobs within this sequence for which allow_failure has not been set before.

Returns

Sequence: The modified Sequence object.

def initialize_artifacts(self, artifacts: Artifacts) ‑> Sequence

Sets Job.artifacts for all jobs within this sequence for which artifacts have not been set before.

Returns

Sequence: The modified Sequence object.

def initialize_cache(self, cache: Cache) ‑> Sequence

Calls Job.set_cache() on all jobs within this sequence for which the cache has not been set before.

Returns

Sequence: The modified Sequence object.

def initialize_dependencies(self, *dependencies: Union[Job, Sequence]) ‑> Sequence

Calls Job.set_dependencies() on all jobs within the first stage of this sequence that haven't had dependencies added before. An empty parameter list means that jobs will get an empty dependency list and thus do not download artifacts by default.

Returns

Sequence: The modified Sequence object.

def initialize_environment(self, environment: Optional[Union[Environment, str]]) ‑> Sequence

Calls Job.set_environment() on all jobs within this sequence.

Returns

Sequence: The modified Sequence object.

def initialize_image(self, image: Union[Image, str]) ‑> Sequence

Calls Job.set_image() on all jobs within this sequence.

Returns

Sequence: The modified Sequence object.
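
A brief sketch contrasting this method with override_image() below (image names are invented):

from gcip import Job, Sequence

job_with_image = Job(stage="build", script="make")
job_with_image.set_image("alpine:3.18")

sequence = Sequence()
sequence.add_children(job_with_image, Job(stage="lint", script="make lint"))

sequence.initialize_image("python:3.11-slim")   # only affects the 'lint' job, which has no image yet
# sequence.override_image("python:3.11-slim")  # would replace the image of both jobs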

def initialize_needs(self, *needs: Union[Need, Job, Sequence]) ‑> Sequence

Calls Job.set_needs() on all jobs within the first stage of this sequence that haven't had needs added before. An empty parameter list means that jobs will get an empty dependency list and thus do not depend on other jobs by default.

Returns

Sequence: The modified Sequence object.
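
For example (a sketch), an empty parameter list cuts the implicit stage ordering for the first-stage jobs:

# jobs of this sequence's first stage start immediately, unless they already have needs set
sequence.initialize_needs()

# enforce empty needs regardless of what is already set on the jobs
sequence.override_needs()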

def initialize_resource_group(self, resource_group: Optional[str]) ‑> Sequence

Calls Job.set_resource_group() on all jobs within this sequence.

Returns

Sequence: The modified Sequence object.

def initialize_retry(self, retry: Optional[Union[Retry, int]]) ‑> Sequence

Calls Job.set_retry() on all jobs within this sequence.

Returns

Sequence: The modified Sequence object.

def initialize_rules(self, *rules: Rule) ‑> Sequence

Calls Job.append_rules() on all jobs within this sequence that haven't had rules added before.

Returns

Sequence: The modified Sequence object.

def initialize_tags(self, *tags: str) ‑> Sequence

Calls Job.add_tags() on all jobs within this sequence that haven't had tags added before.

Returns

Sequence: The modified Sequence object.

def initialize_timeout(self, timeout: Optional[str]) ‑> Sequence

Calls Job.set_timeout() on all jobs within this sequence.

Returns

Sequence: The modified Sequence object.

def initialize_variables(self, **variables: str) ‑> Sequence

Calls Job.add_variables() on all jobs within this sequence that haven't had variables added before.

Returns

Sequence: The modified Sequence object.
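
A brief sketch (variable names are invented) of how the variable methods interact:

sequence.initialize_variables(LOG_LEVEL="info")   # only jobs without their own variables
sequence.add_variables(STAGE_NAME="testing")      # added to every job
sequence.override_variables(LOG_LEVEL="debug")    # replaces the variables of every job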

def initialize_when(self, when: Optional[WhenStatement]) ‑> Sequence

Calls Job.set_when() on all jobs within this sequence.

Returns

Sequence: The modified Sequence object.

def override_allow_failure(self, allow_failure: Optional[Union[bool, str, int, List[int]]]) ‑> Sequence

Calls Job.set_allow_failure() on all jobs within this sequence, overriding any previously set value.

Returns

Sequence: The modified Sequence object.

def override_artifacts(self, artifacts: Artifacts) ‑> Sequence

Calls Job.set_artifacts() on all jobs within this sequence, overriding any artifacts previously added to those jobs.

Returns

Sequence: The modified Sequence object.

def override_dependencies(self, *dependencies: Union[Job, Sequence]) ‑> Sequence

Calls Job.set_dependencies() on all jobs within the first stage of this sequence, overriding any dependencies previously added to those jobs. An empty parameter list means that jobs will get an empty dependency list and thus do not download artifacts.

Returns

Sequence: The modified Sequence object.

def override_environment(self, environment: Optional[Union[Environment, str]]) ‑> Sequence

Calls Job.set_environment() on all jobs within this sequence, overriding any previously set value.

Returns

Sequence: The modified Sequence object.

def override_image(self, image: Union[Image, str]) ‑> Sequence

Calls Job.set_image() on all jobs within this sequence, overriding any previously set value.

Returns

Sequence: The modified Sequence object.

def override_needs(self, *needs: Union[Need, Job, Sequence]) ‑> Sequence

Calls Job.set_needs() on all jobs within the first stage of this sequence, overriding any needs previously added to those jobs. An empty parameter list means that jobs will get an empty dependency list and thus do not depend on other jobs.

Returns

Sequence: The modified Sequence object.

def override_resource_group(self, resource_group: Optional[str]) ‑> Sequence

Calls Job.set_resource_group() on all jobs within this sequence, overriding any previously set value.

Returns

Sequence: The modified Sequence object.

def override_retry(self, retry: Optional[Union[Retry, int]]) ‑> Sequence

Calls Job.set_retry() on all jobs within this sequence, overriding any previously set value.

Returns

Sequence: The modified Sequence object.

def override_rules(self, *rules: Rule) ‑> Sequence

Calls gcip.core.job.Job.override_rules() on all jobs within this sequence, overriding any rules previously added to those jobs.

Returns

Sequence: The modified Sequence object.

def override_tags(self, *tags: str) ‑> Sequence

Calls Job.add_tags() on all jobs within this sequence, overriding any tags previously added to those jobs.

Returns

Sequence: The modified Sequence object.

def override_timeout(self, timeout: Optional[str]) ‑> Sequence

Calls Job.set_timeout() on all jobs within this sequence, overriding any previously set value.

Returns

Sequence: The modified Sequence object.

def override_variables(self, **variables: str) ‑> Sequence

Calls Job.add_variables() on all jobs within this sequence, overriding any variables previously added to those jobs.

Returns

Sequence: The modified Sequence object.

def override_when(self, when: Optional[WhenStatement]) ‑> Sequence

Calls Job.set_when() on all jobs within this sequence, overriding any previously set value.

Returns

Sequence: The modified Sequence object.

def prepend_rules(self, *rules: Rule) ‑> Sequence

Calls Job.prepend_rules() on all jobs within this sequence.

Returns

Sequence: The modified Sequence object.

def prepend_scripts(self, *scripts: str) ‑> Sequence

Calls Job.prepend_scripts() on all jobs within this sequence.

Returns

Sequence: The modified Sequence object.

def set_artifacts(self, artifacts: Artifacts) ‑> Sequence

Sets Job.artifacts for all jobs within this sequence.

Returns

Sequence: The modified Sequence object.

def set_cache(self, cache: Cache) ‑> Sequence

Calls Job.set_cache() on all jobs within this sequence.

Returns

Sequence: The modified Sequence object.

class Service (name: str)

ALPHA This class represents the Gitlab CI Service keyword.

Currently there is nothing more implemented than providing a service name. In general the service functionality currently isn't well implemented, as it is only available for Pipelines.

Expand source code
class Service:
    """**ALPHA** This class represents the Gitlab CI [Service](https://docs.gitlab.com/ee/ci/yaml/#services) keyword.

    Currently there is nothing more implemented than providing a service name. In general the `service` functionality
    currently isn't well implemented, as it is only available for `gcip.core.pipeline.Pipeline`s.
    """

    def __init__(self, name: str):
        self._name = name

    def render(self) -> str:
        """Return a representation of this Service object as dictionary with static values.

        The rendered representation is used by the gcip to dump it
        in YAML format as part of the .gitlab-ci.yml pipeline.

        Returns:
            str: The name of the service, as used in the rendered Gitlab CI yaml.
        """
        return self._name

Methods

def render(self) ‑> str

Return a representation of this Service object as a string with static values.

The rendered representation is used by the gcip to dump it in YAML format as part of the .gitlab-ci.yml pipeline.

Returns

str
The name of the service, as used in the rendered Gitlab CI yaml.
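
A minimal sketch of what the class currently provides:

from gcip import Service

service = Service("postgres:14")
assert service.render() == "postgres:14"
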
class TriggerJob (name: Optional[str] = None, stage: Optional[str] = None, project: Optional[str] = None, branch: Optional[str] = None, includes: Union[Include, List[Include], None] = None, strategy: Optional[TriggerStrategy] = None)

This class represents the trigger job.

Jobs with trigger can only use a limited set of keywords. For example, you can’t run commands with script.

Simple example:

trigger_job = TriggerJob(
    stage="trigger-other-job",
    project="myteam/other-project",
    branch="main",
    strategy=TriggerStrategy.DEPEND,
)
trigger_job.append_rules(rules.on_tags().never(), rules.on_main())

Args

project : Optional[str]
The full name of another Gitlab project to trigger (multi-project pipeline trigger). Mutually exclusive with includes. Defaults to None.
branch : Optional[str]
The branch of project the pipeline should be triggered from. Defaults to None.
includes : Optional[List[Include]]
Include a pipeline to trigger (parent-child pipeline trigger). Mutually exclusive with project. Defaults to None.
strategy : Optional[TriggerStrategy]
Determines if the result of this pipeline depends on the triggered downstream pipeline (use TriggerStrategy.DEPEND) or if the downstream pipeline should just be "fired and forgotten" (use None). Defaults to None.

Raises

ValueError
If both project and includes are given.
ValueError
When the limit of three child pipelines is exceeded. See https://docs.gitlab.com/ee/ci/parent_child_pipelines.html for more information.
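
In addition to the multi-project example above, a parent-child trigger might look like the following sketch. The IncludeLocal class used here is an assumption about the concrete Include flavor for files within the same repository:

child_trigger = TriggerJob(
    stage="trigger-child",
    includes=IncludeLocal("child-pipeline.yml"),  # IncludeLocal is assumed for this example
    strategy=TriggerStrategy.DEPEND,
)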
Expand source code
class TriggerJob(Job):
    """This class represents the [trigger](https://docs.gitlab.com/ee/ci/yaml/#trigger) job.

    Jobs with trigger can only use a [limited set of keywords](https://docs.gitlab.com/ee/ci/multi_project_pipelines.html#limitations).
    For example, you can’t run commands with `script`.

    Simple example:

    ```python
    trigger_job = TriggerJob(
        stage="trigger-other-job",
        project="myteam/other-project",
        branch="main",
        strategy=TriggerStrategy.DEPEND,
    )
    trigger_job.append_rules(rules.on_tags().never(), rules.on_main())
    ```

    Args:
        project (Optional[str]): The full name of another Gitlab project to trigger (multi-project pipeline trigger).
            Mutually exclusive with `includes`. Defaults to None.
        branch (Optional[str]): The branch of `project` the pipeline should be triggered from. Defaults to None.
        includes (Optional[List[Include]]): Include a pipeline to trigger (parent-child pipeline trigger).
            Mutually exclusive with `project`. Defaults to None.
        strategy (Optional[TriggerStrategy]): Determines if the result of this pipeline depends on the triggered downstream pipeline
            (use `TriggerStrategy.DEPEND`) or if the downstream pipeline should just be "fired and forgotten" (use `None`). Defaults to None.

    Raises:
        ValueError: If both `project` and `includes` are given.
        ValueError: When the limit of three child pipelines is exceeded. See https://docs.gitlab.com/ee/ci/parent_child_pipelines.html
            for more information.
    """

    def __init__(
        self,
        name: Optional[str] = None,
        stage: Optional[str] = None,
        project: Optional[str] = None,
        branch: Optional[str] = None,
        includes: Union[Include, List[Include], None] = None,
        strategy: Optional[TriggerStrategy] = None,
    ) -> None:
        if includes and project:
            raise ValueError(
                (
                    "You cannot specify 'include' and 'project' together. Either 'include' or 'project' is possible."
                )
            )
        if not includes and not project:
            raise ValueError("Neither 'includes' nor 'project' is given.")

        super().__init__(name=name, stage=stage, script="none")

        self._project = project
        self._branch = branch
        self._strategy = strategy

        if not includes:
            self._includes = None
        elif isinstance(includes, Include):
            self._includes = [includes]
        elif isinstance(includes, list):
            if len(includes) > 3:
                raise ValueError(
                    (
                        "The length of 'includes' is limited to three."
                        "See https://docs.gitlab.com/ee/ci/parent_child_pipelines.html for more information."
                    )
                )
            self._includes = includes
        else:
            raise AttributeError(
                "script parameter must be of type string or list of strings"
            )

    def render(self) -> Dict[Any, Any]:
        """Return a representation of this TriggerJob object as dictionary with static values.

        The rendered representation is used by the gcip to dump it
        in YAML format as part of the .gitlab-ci.yml pipeline.

        Returns:
            Dict[str, Any]: A dictionary representing the trigger job object in Gitlab CI.
        """
        rendered_job = super().render()

        # remove unsupported keywords from TriggerJob
        rendered_job.pop("script")

        if "image" in rendered_job:
            rendered_job.pop("image")

        if "tags" in rendered_job:
            rendered_job.pop("tags")

        if "artifacts" in rendered_job:
            rendered_job.pop("artifacts")

        if "cache" in rendered_job:
            rendered_job.pop("cache")

        trigger: Dict[str, Union[str, List[Dict[str, str]]]] = {}

        # Child pipelines
        if self._includes:
            trigger.update(
                {
                    "include": [include.render() for include in self._includes],
                }
            )

        # Multiproject pipelines
        if self._project:
            trigger.update(
                {
                    "project": self._project,
                }
            )
            if self._branch:
                trigger.update({"branch": self._branch})

        if self._strategy:
            trigger.update({"strategy": self._strategy.value})

        rendered_job = {"trigger": trigger, **rendered_job}

        return rendered_job

Ancestors

Methods

def render(self) ‑> Dict[Any, Any]

Return a representation of this TriggerJob object as dictionary with static values.

The rendered representation is used by the gcip to dump it in YAML format as part of the .gitlab-ci.yml pipeline.

Returns

Dict[str, Any]
A dictionary representing the trigger job object in Gitlab CI.

Inherited members

class TriggerStrategy (*args, **kwds)

This class represents the trigger:strategy keyword.

Expand source code
class TriggerStrategy(Enum):
    """This class represents the [trigger:strategy](https://docs.gitlab.com/ee/ci/yaml/#linking-pipelines-with-triggerstrategy)
    keyword."""

    DEPEND = "depend"
    """Use this strategy to force the `TriggerJob` to wait for the downstream (multi-project or child) pipeline to complete."""

Ancestors

  • enum.Enum

Class variables

var DEPEND

Use this strategy to force the TriggerJob to wait for the downstream (multi-project or child) pipeline to complete.

class WhenStatement (*args, **kwds)

This enum holds different when statements for Rules.

Expand source code
class WhenStatement(Enum):
    """This enum holds different [when](https://docs.gitlab.com/ee/ci/yaml/#when) statements for `Rule`s."""

    ALWAYS = "always"
    DELAYED = "delayed"
    MANUAL = "manual"
    NEVER = "never"
    ON_FAILURE = "on_failure"
    ON_SUCCESS = "on_success"

Ancestors

  • enum.Enum

Class variables

var ALWAYS
var DELAYED
var MANUAL
var NEVER
var ON_FAILURE
var ON_SUCCESS
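
A short usage sketch (stage and script are invented), combining the enum with a sequence:

from gcip import Job, Sequence, WhenStatement

deploy_sequence = Sequence()
deploy_sequence.add_children(Job(stage="deploy", script="make deploy"))

# require manual interaction before the deploy jobs run
deploy_sequence.initialize_when(WhenStatement.MANUAL)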