die-welt.net – a broken world, by Evgeni Golov

Building documentation for Ansible Collections using antsibull

In my recent post about building and publishing documentation for Ansible Collections, I mentioned that the Ansible Community is currently in the process of making their build tools available as a separate project called antsibull, instead of keeping them in the hacking directory of ansible.git.

I also said that I couldn't get the documentation to build with antsibull-docs, as it didn't support collections yet. Luckily, Felix Fontein, one of the maintainers of antsibull, pointed out that I was wrong and that later versions of antsibull actually have partial collections support. So I went ahead and tried it again.

And what should I say? Two bug reports by me and four patches by Felix later, I can use antsibull-docs to generate the Foreman Ansible Modules documentation!

Let's look in detail at what's needed instead of the ugly hack.

We obviously don't need to clone ansible.git anymore and install its requirements manually. Instead we can just install antsibull (0.17.0 contains all the above patches). We also need Ansible (or ansible-base) 2.10 or newer, which currently only exists as a pre-release. 2.10 is the first version that has an ansible-doc that can list the contents of a collection, which antsibull-docs requires to work properly.
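A minimal sketch of that setup, assuming you install from PyPI (the exact version pins are illustrative):

pip install 'antsibull>=0.17.0'  # contains all the patches mentioned above
pip install --pre ansible-base   # 2.10 only exists as a pre-release right now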

The current implementation of collections documentation in antsibull-docs requires the collection to be installed, as in "Ansible can find it". We had the same requirement before to find the documentation fragments and can just re-use the installation we do for various other build tasks in build/collection, pointing at it using the ANSIBLE_COLLECTIONS_PATHS environment variable or the collections_paths setting in ansible.cfg¹. After that, it's only a matter of passing --use-current to make it pick up installed collections instead of trying to fetch and parse them itself.
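To illustrate, pointing Ansible at the collection installed under build/collection could look like this (the paths follow our repository layout; adjust them to yours):

# environment variable variant
export ANSIBLE_COLLECTIONS_PATHS=./build/collection

# or the equivalent setting in ansible.cfg:
#   [defaults]
#   collections_paths = ./build/collection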

Given the main goal of antsibull-docs collection is to build documentation for multiple collections at once, it defaults to placing the generated files into <dest-dir>/collections/<namespace>/<collection>. However, we only build documentation for one collection and thus pass --squash-hierarchy to avoid this longish path and make it generate the documentation directly in <dest-dir>. Thanks to Felix for implementing this feature for us!

And that's it! We can generate our documentation with a single line now!

antsibull-docs collection --use-current --squash-hierarchy --dest-dir ./build/plugin_docs theforeman.foreman

The PR to switch to antsibull is open for review, and I hope to get it merged in soon!

Oh, and you know what's cool? The documentation is now also available as a preview on ansible.com!


  1. Yes, the paths version of that setting is deprecated in 2.10, but as we support older Ansible versions, we still use it. ↩

Building and publishing documentation for Ansible Collections

I had a draft of this article for about two months, but never really managed to polish and finalize it, partially due to some nasty hacks needed down the road. Luckily, one of my wishes was heard and I now had the chance to revisit the post and try a few things out. Unfortunately, my wish was granted only partially and the result is still not pretty, but read for yourself 😉

UPDATE: I have published a follow-up post on building documentation for Ansible Collections using antsibull, as my wish has now been fully granted.

As part of my day job, I'm maintaining the Foreman Ansible Modules – a collection of modules to interact with Foreman and its plugins (most notably Katello). We have been maintaining this collection (as in "set of modules") since 2017, much longer than collections (as in "Ansible Collections") existed, but the introduction of Ansible Collections allowed us to provide a much easier and supported way to distribute the modules to our users.

Now users usually want two things: features and documentation. Features are easy, we already have plenty of them. But documentation was a bit cumbersome: we had documentation inside the modules, so you could read it via ansible-doc on the command line if you had the collection installed, but we wanted to provide online readable and versioned documentation too – something the users are used to from the official Ansible documentation.

Building HTML from Ansible modules

Ansible modules contain documentation in the form of YAML blocks documenting the parameters, examples and return values of the module. The Ansible documentation site is built using Sphinx from reStructuredText. As the modules don't contain reStructuredText, Ansible has (had) a tool to generate it from the documentation YAML: build-ansible.py document-plugins. The tool and the accompanying libraries are not part of the Ansible distribution – they just live in the hacking directory. To run them we need a git checkout of Ansible and to source hacking/env-setup to set PYTHONPATH and a few other variables correctly for Ansible to run directly from that checkout.

It would be nice if that'd be a feature of ansible-doc, but while it isn't, we need a full Ansible git checkout to be able to continue. The tool has recently been split out into its own repository/distribution: antsibull. However, it was also redesigned a bit to be easier to use (nice!), and my hack to abuse it to build documentation for out-of-tree modules doesn't work anymore (bad!). There is an issue open for collections support, so I hope to be able to switch to antsibull soon.

Anyhow, back to the original hack.

As we're using documentation fragments, we need to tell the tool to look for those, as otherwise we'd get errors about fragments that could not be found.
We pass ANSIBLE_COLLECTIONS_PATHS so that the tool can find the correct, namespaced documentation fragments there.
We also need to provide --module-dir pointing at the actual modules we want to build documentation for.

ANSIBLEGIT=/path/to/ansible.git
source ${ANSIBLEGIT}/hacking/env-setup
ANSIBLE_COLLECTIONS_PATHS=../build/collections python3 ${ANSIBLEGIT}/hacking/build-ansible.py document-plugins --module-dir ../plugins/modules --template-dir ./_templates --template-dir ${ANSIBLEGIT}/docs/templates --type rst --output-dir ./modules/

Ideally, once antsibull supports collections, this will become antsibull-docs collection … without any need to have an Ansible checkout, source env-setup or pass tons of paths.

Until then we have a Makefile that clones Ansible, runs the above command and then calls Sphinx (which provides a nice Makefile for building) to generate HTML from the reStructuredText.
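Expressed as plain shell, the Makefile roughly automates the following (the clone location and requirements file are illustrative, not our exact setup):

# "make doc-setup": get an Ansible checkout and its requirements
git clone https://github.com/ansible/ansible.git ansible.git
pip install -r ansible.git/requirements.txt

# "make doc": generate reStructuredText as shown above, then render HTML
source ansible.git/hacking/env-setup
ANSIBLE_COLLECTIONS_PATHS=../build/collections python3 ansible.git/hacking/build-ansible.py document-plugins --module-dir ../plugins/modules --template-dir ./_templates --template-dir ansible.git/docs/templates --type rst --output-dir ./modules/
make -C docs html  # Sphinx's Makefile puts the result into docs/_build/html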

You can find our slightly modified templates and themes in our git repository in the docs directory.

Publishing HTML documentation for Ansible Modules

Now that we have a way to build the documentation, let's also automate publishing, because nothing is worse than out-of-date documentation!

We're using GitHub and GitHub Actions for that, but you can achieve the same with GitLab, Travis CI or Jenkins.

First, we need a trigger. As we want always up-to-date documentation for the main branch where all the development happens, and also documentation for all stable releases that are tagged (we use vX.Y.Z for the tags), we can do something like this:

on:
  push:
    tags:
      - v[0-9]+.[0-9]+.[0-9]+
    branches:
      - master

Now that we have a trigger, we define the job steps that get executed:

    steps:
      - name: Check out the code
        uses: actions/checkout@v2
      - name: Set up Python
        uses: actions/setup-python@v2
        with:
          python-version: "3.7"
      - name: Install dependencies
        run: make doc-setup
      - name: Build docs
        run: make doc

At this point we'll have the docs built by make doc in the docs/_build/html directory, but not published anywhere yet.

As we're using GitHub anyhow, we can also use GitHub Pages to host the result.

      - uses: actions/checkout@v2
      - name: configure git
        run: |
          git config user.name "${GITHUB_ACTOR}"
          git config user.email "${GITHUB_ACTOR}@bots.github.com"
          git fetch --no-tags --prune --depth=1 origin +refs/heads/*:refs/remotes/origin/*
      - name: Set up Python
        uses: actions/setup-python@v2
        with:
          python-version: "3.7"
      - name: Install dependencies
        run: make doc-setup
      - name: Build docs
        run: make doc
      - name: commit docs
        run: |
          git checkout gh-pages
          rm -rf $(basename ${GITHUB_REF})
          mv docs/_build/html $(basename ${GITHUB_REF})
          dirname */index.html | sort --version-sort | xargs -I@@ -n1 echo '<div><a href="@@/"><p>@@</p></a></div>' > index.html
          git add $(basename ${GITHUB_REF}) index.html
          git commit -m "update docs for $(basename ${GITHUB_REF})" || true
      - name: push docs
        run: git push origin gh-pages

As this is not exactly self-explanatory:

  1. Configure git to have a proper author name and email, as otherwise you get ugly history and maybe even failing commits.
  2. Fetch all branch names, as the checkout action by default doesn't do that.
  3. Set up Python, Sphinx, Ansible etc.
  4. Build the documentation as described above.
  5. Switch to the gh-pages branch from the commit that triggered the workflow.
  6. Remove any existing documentation for this tag/branch ($GITHUB_REF contains the name which triggered the workflow) if it already exists.
  7. Move the previously built documentation from the Sphinx output directory to a directory named after the current target.
  8. Generate a simple index of all available documentation versions.
  9. Commit all changes, but don't fail if there is nothing to commit.
  10. Push to the gh-pages branch, which will trigger a GitHub Pages deployment.

Pretty sure this won't win any beauty contest for scripting and automation, but it gets the job done and nobody on the team has to remember to update the documentation anymore.

You can see the results on theforeman.org or directly on GitHub.

Scanning with a Brother MFC-L2720DW on Linux without any binary blobs

Back in 2015 I got a Brother MFC-L2720DW for the casual "I need to print these two pages" and "I need to scan these receipts" at home (and home office ;)). It's a rather cheap (I paid less than 200€ in 2015) monochrome laser printer, scanner and fax with a (well, two: wired and wireless) network interface. In those 5 years I have never used the fax or WiFi functions, but printed and scanned a few pages.

Brother provides Linux drivers, but those are binary blobs which I never really liked to run.

The printer part works just fine with a "Generic PCL 6/PCL XL" driver in CUPS and even "driverless" via AirPrint on Linux. You can also feed it plain PostScript, but I found that rather slow compared to PCL. On recent Androids it works using the built-in printer service, or Mopria Print Service for older ones – I used to joke "why would you need a printer on your phone?!", but found it quite useful after a few tries.

However, for the scanner part I had to use Brother's brscan4 driver on Linux and their iPrint&Scan app on Android – Mopria Scan doesn't support it.

Until, last Friday, I saw a NEW package being uploaded to Debian: sane-airscan. And yes, monitoring the Debian NEW queue via Twitter is totally legit!

sane-airscan is an implementation of Apple's AirScan (eSCL) and Microsoft's WSD/WS-Scan protocols for SANE. I had never heard of those before – only of AirPrint – but luckily this does not mean nobody has reverse-engineered them and created something that works beautifully on Linux. As of today there are no packages in the official Fedora repositories and the Debian ones are still in NEW; however, the upstream documentation refers to an openSUSE OBS repository that works like a charm in the meantime (on Fedora 32).

The only drawback I have seen so far: the scanner only works in "Color" mode and there is no way to scan in "Grayscale", making scanning a tad slower. This has been reported upstream and may or may not be fixable, as it seems the device does not announce any mode besides "Color".
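For reference, checking what the backend exposes is easy with SANE's command line tools – the device name below is a made-up example, scanimage -L prints the real one:

# list detected scanners
scanimage -L

# show the backend-specific options for a given device
scanimage --help --device-name 'airscan:e0:Brother MFC-L2720DW'

# scan a page; Color is currently the only mode the device announces
scanimage --device-name 'airscan:e0:Brother MFC-L2720DW' --mode Color --resolution 300 --format png > scan.png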

Interestingly, SANE has had an eSCL backend of its own since 1.0.29, but it's disabled in Fedora in favor of sane-airscan, even though the latter is not available in Fedora yet. However, it might not even need separate packaging, as SANE upstream is planning to integrate it into sane-backends directly.

Using Ansible Molecule to test roles in monorepos

Ansible Molecule is a toolkit for testing Ansible roles. It allows for easy execution and verification of your roles and also manages the environment (container, VM, etc.) in which those are executed.

In the Foreman project we have a collection of Ansible roles to set up Foreman instances called forklift. The roles range from configuring Libvirt and Vagrant for our CI to deploying full-fledged Foreman and Katello setups with Proxies and everything. The repository also contains a dynamic Vagrant file that can generate Foreman and Katello installations on all supported Debian, Ubuntu and CentOS platforms using the previously mentioned roles. This feature is super helpful when you need to debug something specific to an OS/version combination.

Up until recently, all these roles didn't have any tests. We would run ansible-lint on them, but that was it.

As I'm planning to do some heavier work on some of the roles to enhance our upgrade testing, I decided to add some tests first. Using Molecule, of course.

Adding Molecule to an existing role is easy: molecule init scenario -r my-role-name will add all the necessary files/examples for you. It's left as an exercise to the reader how to actually test the role properly, as that is not what this post is about.

Executing the tests with Molecule is also easy: molecule test. And there are also examples how to integrate the test execution with the common CI systems.

But what happens if you have more than one role in the repository? Molecule has support for monorepos, but it is rather limited: it will detect the role path correctly, so roles can depend on other roles from the same repository, but it won't find and execute tests for roles if you run it from the repository root. There is an undocumented way to set MOLECULE_GLOB so that Molecule would detect test scenarios in different paths, but I couldn't get it to work nicely for executing tests of multiple roles, and upstream currently does not plan to implement this. Well, bash to the rescue!

for roledir in roles/*/molecule; do
    pushd $(dirname $roledir)
    molecule test
    popd
done

Add that to your CI and be happy! The CI will execute all available tests, and you can still execute those for the role you're hacking on by just calling molecule test as you're used to.

However, we can do even better.

When you initialize a role with Molecule or add Molecule to an existing role, quite a few files are added in the molecule directory, plus a yamllint configuration in the role root. If you have many roles, you will notice that especially the molecule.yml and .yamllint files look very similar for each role.

It would be much nicer if we could keep those in a shared place.

Molecule supports a "base config": a configuration file that gets merged with the molecule.yml of your project. By default, that is ~/.config/molecule/config.yml, but Molecule will actually look for a .config/molecule/config.yml in two places: the root of the VCS repository and your HOME. And guess what? The one in the repository wins (that's not yet well documented). So by adding a .config/molecule/config.yml to the repository, we can place all shared configuration there and don't have to duplicate it in every role.
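A minimal sketch of such a shared .config/molecule/config.yml – the driver and verifier choices here are assumptions, use whatever your roles need:

---
dependency:
  name: galaxy
driver:
  name: docker
provisioner:
  name: ansible
verifier:
  name: ansible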

And that .yamllint file? We can also move that to the repository root and add the following to Molecule's (now shared) configuration:

lint: yamllint --config-file ${MOLECULE_PROJECT_DIRECTORY}/../../.yamllint --format parsable .

This will define the lint action as calling yamllint with the configuration stored in the repository root instead of the project directory, assuming you store your roles as roles/<rolename>/ in the repository.

And that's it. We now have a central place for our Molecule and yamllint configurations and only need to place role-specific data into the role directory.

Automatically renaming the default git branch to “devel”

It seems GitHub is planning to rename the default branch for newly created repositories from "master" to "main". It's incredible how much positive PR you can get with a one line configuration change, while still working together with the ICE.

However, this post is not about bashing GitHub.

Changing the default branch for newly created repositories is good. And you should also do that for the ones you create with git init locally. But what about all the repositories out there? GitHub surely won't force-rename those branches, but we can!

Ian will do that as he touches the individual repositories, but I tend to forget things unless I do them immediately…

Oh, so this is another "automate everything with an API" post? Yes, yes it is!

And yes, I'm going to use GitHub here, but something similar should be implementable on any git hosting platform that has an API.

Of course, if you have SSH access to the repositories, you could also just edit HEAD in a for loop in bash, but that would be boring 😉
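For completeness, such a loop could look like this – a hypothetical sketch assuming the bare repositories live under /srv/git:

for repo in /srv/git/*.git; do
    git --git-dir="${repo}" symbolic-ref HEAD refs/heads/devel
done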

I'm going with devel btw, as I'm already used to develop in the Foreman project and devel in Ansible.

acquire credentials

My GitHub account is 2FA enabled, so I can't just use my username and password in a basic HTTP API client. So the first step is to acquire a personal access token that can be used instead. Of course I could also have implemented OAuth2 in my awful script, but ain't nobody got time for that.

The token will require the "repo" permission to be able to change repositories.

And we'll need some boilerplate code (I'm using Python 3 and requests, but anything else will work too):

#!/usr/bin/env python3

import requests

BASE='https://api.github.com'
USER='evgeni'
TOKEN='abcdef'

headers = {'User-Agent': '@{}'.format(USER)}
auth = (USER, TOKEN)

session = requests.Session()
session.auth = auth
session.headers.update(headers)
session.verify = True

This will store our username and token, and create a requests.Session so that we don't have to pass the same data all the time.

get a list of repositories to change

I want to change all my own repos that are not archived, not forks, and actually have the default branch set to master. YMMV.

As we're authenticated, we can just list the repositories of the currently authenticated user and limit them to "owner" only.

GitHub uses pagination for their API, so we'll have to loop until we get to the end of the repository list.

repos_to_change = []

url = '{}/user/repos?type=owner'.format(BASE)
while url:
    r = session.get(url)
    if r.ok:
        repos = r.json()
        for repo in repos:
            if not repo['archived'] and not repo['fork'] and repo['default_branch'] == 'master':
                repos_to_change.append(repo['name'])
        if 'next' in r.links:
            url = r.links['next']['url']
        else:
            url = None
    else:
        url = None

create a new devel branch and mark it as default

Now that we know which repos to change, we need to fetch the SHA of the current master, create a new devel branch pointing at the same commit and then set that new branch as the default branch.

for repo in repos_to_change:
    master_data = session.get('{}/repos/evgeni/{}/git/ref/heads/master'.format(BASE, repo)).json()
    data = {'ref': 'refs/heads/devel', 'sha': master_data['object']['sha']}
    session.post('{}/repos/{}/{}/git/refs'.format(BASE, USER, repo), json=data)
    default_branch_data = {'default_branch': 'devel'}
    session.patch('{}/repos/{}/{}'.format(BASE, USER, repo), json=default_branch_data)
    session.delete('{}/repos/{}/{}/git/refs/heads/{}'.format(BASE, USER, repo, 'master'))

I've also opted to actually delete the old master, as I think that's the safest way to let the users know that it's gone. Letting it rot in the repository would mean people can still pull and won't notice that there are no changes anymore, as the default branch moved to devel.

So…

announcement

I have updated all my (those in the evgeni namespace) non-archived repositories to have devel instead of master as the default branch.

Have fun updating!

code

#!/usr/bin/env python3

import requests

BASE='https://api.github.com'
USER='evgeni'
TOKEN='abcd'

headers = {'User-Agent': '@{}'.format(USER)}
auth = (USER, TOKEN)

session = requests.Session()
session.auth = auth
session.headers.update(headers)
session.verify = True

repos_to_change = []

url = '{}/user/repos?type=owner'.format(BASE)
while url:
    r = session.get(url)
    if r.ok:
        repos = r.json()
        for repo in repos:
            if not repo['archived'] and not repo['fork'] and repo['default_branch'] == 'master':
                repos_to_change.append(repo['name'])
        if 'next' in r.links:
            url = r.links['next']['url']
        else:
            url = None
    else:
        url = None

for repo in repos_to_change:
    master_data = session.get('{}/repos/evgeni/{}/git/ref/heads/master'.format(BASE, repo)).json()
    data = {'ref': 'refs/heads/devel', 'sha': master_data['object']['sha']}
    session.post('{}/repos/{}/{}/git/refs'.format(BASE, USER, repo), json=data)
    default_branch_data = {'default_branch': 'devel'}
    session.patch('{}/repos/{}/{}'.format(BASE, USER, repo), json=default_branch_data)
    session.delete('{}/repos/{}/{}/git/refs/heads/{}'.format(BASE, USER, repo, 'master'))

mass-migrating modules inside an Ansible Collection

In the Foreman project, we've been maintaining a collection of Ansible modules to manage Foreman installations since 2017. That is, two years before Ansible had the concept of collections at all.

For that you had to set library (and later module_utils and doc_fragment_plugins) in ansible.cfg and effectively inject our modules, their helpers and documentation fragments into the main Ansible namespace. Not the cleanest solution, but it worked quite well for us.

When Ansible started introducing Collections, we quickly joined in, as the idea of namespaced, easily distributable and usable content units was great and exactly matched what we had in mind.

However, collections are only usable in Ansible 2.8 – or actually 2.9, as 2.8 can consume them but the tooling around building and installing them is lacking. Because of that we had been keeping our modules usable outside of a collection.

Until recently, when we decided it's time to move on, drop that compatibility (which cost us a few headaches over the time) and release a shiny 1.0.0.

One of the changes we wanted for 1.0.0 is renaming a few modules. Historically we had the module names prefixed with foreman_ and katello_, depending on whether they were designed to work with Foreman (and plugins) or Katello (which is technically a Foreman plugin, but has a much more complicated deployment and currently can't be easily added to an existing Foreman setup). This made sense as long as we were injecting into the main Ansible namespace, but with collections the names would have turned into theforeman.foreman.foreman_<something>, and while we all love Foreman, that was a bit too much. So we wanted to drop that prefix. And while at it, also change some other names (like ptable, which became partition_table) to be more readable.

But how? There is no tooling that can rename all files accordingly and adjust examples and tests. Well, bash to the rescue! I'm usually not a big fan of bash scripts, but renaming files, searching and replacing strings? That fits perfectly!

First of all we need a way to map the old name to the new name. Usually it's just "drop the prefix"; for the others you can have some if/elif/fi:

prefixless_name=$(echo ${old_name}| sed -E 's/^(foreman|katello)_//')
if [[ ${old_name} == 'foreman_environment' ]]; then
  new_name='puppet_environment'
elif [[ ${old_name} == 'katello_sync' ]]; then
  new_name='repository_sync'
elif [[ ${old_name} == 'katello_upload' ]]; then
  new_name='content_upload'
elif [[ ${old_name} == 'foreman_ptable' ]]; then
  new_name='partition_table'
elif [[ ${old_name} == 'foreman_search_facts' ]]; then
  new_name='resource_info'
elif [[ ${old_name} == 'katello_manifest' ]]; then
  new_name='subscription_manifest'
elif [[ ${old_name} == 'foreman_model' ]]; then
  new_name='hardware_model'
else
  new_name=${prefixless_name}
fi

That defined, we also need to actually have an ${old_name}. Well, that's a for loop over the modules, right?

for module in ${BASE}/foreman_*py ${BASE}/katello_*py; do
  old_name=$(basename ${module} .py)
done

While we're looping over the files, let's rename them and all the files that are associated with the module:

# rename the module
git mv ${BASE}/${old_name}.py ${BASE}/${new_name}.py

# rename the tests and test fixtures
git mv ${TESTS}/${old_name}.yml ${TESTS}/${new_name}.yml
git mv tests/fixtures/apidoc/${old_name}.json tests/fixtures/apidoc/${new_name}.json
for testfile in ${TESTS}/fixtures/${old_name}-*.yml; do
  git mv ${testfile} $(echo ${testfile}| sed "s/${old_name}/${new_name}/")
done

Now comes the really tricky part: search and replace. Let's see where we need to replace first:

  1. in the module file
    1. the module key of the DOCUMENTATION stanza (e.g. module: foreman_example)
    2. all examples (e.g. foreman_example: …)
  2. in all test playbooks (e.g. foreman_example: …)
  3. in pytest's conftest.py and other files related to test execution
  4. in documentation

sed -E -i "/^(\s+${old_name}|module):/ s/${old_name}/${new_name}/g" ${BASE}/*.py

sed -E -i "/^(\s+${old_name}|module):/ s/${old_name}/${new_name}/g" tests/test_playbooks/tasks/*.yml tests/test_playbooks/*.yml

sed -E -i "/'${old_name}'/ s/${old_name}/${new_name}/" tests/conftest.py tests/test_crud.py

sed -E -i "/\`${old_name}\`/ s/${old_name}/${new_name}/g" README.md docs/*.md

You have probably noticed I used ${BASE} and ${TESTS} and never defined them… Lazy me.

But here is the full script, defining the variables and looping over all the modules.

#!/bin/bash

BASE=plugins/modules
TESTS=tests/test_playbooks
RUNTIME=meta/runtime.yml

echo "plugin_routing:" > ${RUNTIME}
echo "  modules:" >> ${RUNTIME}

for module in ${BASE}/foreman_*py ${BASE}/katello_*py; do
  old_name=$(basename ${module} .py)
  prefixless_name=$(echo ${old_name}| sed -E 's/^(foreman|katello)_//')
  if [[ ${old_name} == 'foreman_environment' ]]; then
    new_name='puppet_environment'
  elif [[ ${old_name} == 'katello_sync' ]]; then
    new_name='repository_sync'
  elif [[ ${old_name} == 'katello_upload' ]]; then
    new_name='content_upload'
  elif [[ ${old_name} == 'foreman_ptable' ]]; then
    new_name='partition_table'
  elif [[ ${old_name} == 'foreman_search_facts' ]]; then
    new_name='resource_info'
  elif [[ ${old_name} == 'katello_manifest' ]]; then
    new_name='subscription_manifest'
  elif [[ ${old_name} == 'foreman_model' ]]; then
    new_name='hardware_model'
  else
    new_name=${prefixless_name}
  fi

  echo "renaming ${old_name} to ${new_name}"

  git mv ${BASE}/${old_name}.py ${BASE}/${new_name}.py

  git mv ${TESTS}/${old_name}.yml ${TESTS}/${new_name}.yml
  git mv tests/fixtures/apidoc/${old_name}.json tests/fixtures/apidoc/${new_name}.json
  for testfile in ${TESTS}/fixtures/${old_name}-*.yml; do
    git mv ${testfile} $(echo ${testfile}| sed "s/${old_name}/${new_name}/")
  done

  sed -E -i "/^(\s+${old_name}|module):/ s/${old_name}/${new_name}/g" ${BASE}/*.py

  sed -E -i "/^(\s+${old_name}|module):/ s/${old_name}/${new_name}/g" tests/test_playbooks/tasks/*.yml tests/test_playbooks/*.yml

  sed -E -i "/'${old_name}'/ s/${old_name}/${new_name}/" tests/conftest.py tests/test_crud.py

  sed -E -i "/\`${old_name}\`/ s/${old_name}/${new_name}/g" README.md docs/*.md

  echo "    ${old_name}:" >> ${RUNTIME}
  echo "      redirect: ${new_name}" >> ${RUNTIME}

  git commit -m "rename ${old_name} to ${new_name}" ${BASE} tests/ README.md docs/ ${RUNTIME}
done

As a bonus, the script will also generate a meta/runtime.yml which can be used by Ansible 2.10+ to automatically use the new module names if a playbook contains the old ones.
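The generated file then contains one entry per renamed module, for example:

plugin_routing:
  modules:
    foreman_ptable:
      redirect: partition_table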

Oh, and yes, this is probably not the nicest script you'll read this year. Maybe not even today. But it got the job done well, and I don't intend to need it again anyhow.

naked pings 2020

ajax' post about "ping" etiquette is over 10 years old, but holds true to this day. So true that my IRC client at work has a script that will reply with a link to it whenever I get a naked ping.

But IRC is not the only means of communication. There is also mail, (video) conferencing, and GitHub/GitLab. Well, at least in the software engineering context. Oh, and yes, it's 2020 and I still (proudly) have no Slack account.

Luckily, (naked) pings are not really a thing in mail or conferencing, but I see an increasing amount of them on GitHub and it bothers me, a lot. As there is no direct messaging on GitHub, you may rightfully ask why – there is always context in form of the issue or PR the ping happened in – so lean back and listen 😉

notifications become useless

While there might be context in the issue/PR, there is none (besides the title) in the notification mail, and not even the title in the notification from the Android app (which I have installed because I use it a lot for smaller reviews). So the ping will always force a full context switch to open the web view of the issue in question, removing the possibility to just swipe away the notification/mail as "not important right now".

even some context is not enough context

Even after visiting the issue/PR, the ping quite often remains non-actionable. Do you want me to debug/fix the issue? Review the PR? Merge it? Close it? I don't know!

The only actionable ping is when the previous message is directed at me and has an actionable request in it, and the ping is just a reminder that I have to do it. And even then, why not write "hey @evgeni, did you have time to process my last question?" or something similar?

BTW, that's also what I dislike about ajax' minimal example "ping re bz 534027" – what am I supposed to do with that BZ?!

why me anyway?!

Unless I am the only maintainer of a repo or the author of the issue/PR, there is usually no reason to ping me directly. I might be sick, or on holiday, or currently not working on that particular repo/topic, or whatever. Any of that will result in you thinking that your request will be prioritized, while in reality it won't. Even worse, somebody might come across it, see me mentioned and think "okay, that's Evgeni's playground, I'll look elsewhere".

Most organizations have groups of people working on specific topics. If you know the group name and have enough permissions (I'm not exactly sure which, just that GitHub has limits to avoid spam, sorry) you can ping @organization/group and everyone in that group will get a notification. That's far from perfect, but at least it will get the attention of the right people. Sometimes there is also a bot that will either automatically ping a group of people, or that you can trigger to do so.

Oh, and I'm getting paid for work on open source. So if you are pinging me in a work-related repository, there is a high chance I'll only process that during work hours, while another co-worker might have been available to help you out almost immediately.

be patient

Unless we talked on another medium before and I'm waiting for it, please don't ping directly after creating the issue/PR. Maintainers get notifications about new stuff and will triage and process it at some point.

conclusion

If you feel called out, please don't take it personally. Instead, please try to provide as much actionable information as possible and be patient; that's the best way to get a high quality result.

I will ignore pings where I don't immediately know what to do, and so should you.

one more thing

Oh, and if you ping me on IRC, with context, and then disconnect before I can reply…

In the past you would sometimes get a reply by mail. These days the request will most likely be ignored. I don't like talking to the void. Sorry.

show your desk

Some days ago I posted a picture of my desk on Mastodon and Twitter.

standing desk with a monitor, laptop etc

After that I got multiple questions about the setup, so I thought "Michael and Michael did posts about their setups, you could too!"

And well, here we are 😉

desk

The desk is a Flexispot E5B frame with a 200×80×2.6cm oak table top.

The Flexispot E5 (the B stands for black) is a rather cheap (as in: not expensive) standing desk frame. It has a retail price of 379€, but you can often get it for as little as 299€ on sale.

Add a nice table top from a local store (mine was like 99€), a bit of wood oil and some work, and you get a nice standing desk for less than 500€.

The frame has three memory positions, but I only use two: one for sitting, one for standing. There is also a "change position" timer that I have never used so far.

The table top has a bit of a swing when in standing position (mine is at 104cm according to the electronics in the table), but not enough to disturb typing on the keyboard or thinking. I certainly wouldn't place a sewing machine up there, but that was not a requirement anyhow 😉

To compare: the IKEA Bekant table has a similar, maybe even slightly stronger, swing.

chair

Speaking of IKEA… The chair is an IKEA Volmar. They don't seem to sell it anymore since mid 2019 though, so no link here.

hardware

laptop

A Lenovo ThinkPad T480s, i7-8650U, 24GB RAM, running Fedora 32 Workstation. Just enough power while not being too big and heavy. Full of stickers, because I ♥ stickers!

It's connected to a Lenovo ThinkPad Thunderbolt 3 Dock (Gen 1). After 2 years with that thing, I'm still not sure what to think of it, as I had various issues with it over that time:

  • the internal USB hub just vanishing from existence until a full power cycle of the dock was performed, but that might have been caused by my USB switch, which I recently removed.
  • the NIC negotiating at 100MBit/s instead of 1000MBit/s and then keeping on re-negotiating every couple of minutes, disconnecting me from the network, but I have not seen that since the Fedora 32 upgrade.
  • the USB-attached keyboard not working during boot, as it needs some Thunderbolt magic.

The ThinkPad stands on an Adam Hall Stands SLT001E, a rather simple stand for laptops and other equipment (primarily made for DJs, I think). The dock fits exactly between the two feet of the stand, which is nice and saves space on the table. Using the stand I can use the laptop screen as a second screen when I want to – but most often I don't, and have the laptop lid closed while working.

workstation

A Lenovo ThinkStation P410, Xeon E5-2620 v4, 96GB RAM, running Fedora 32 Workstation. That's my VM playground. Having lots of RAM really helps if you need/want to run many VMs with Foreman/Katello or Red Hat Satellite, as they tend to be a bit memory hungry, and throwing hardware at problems tends to be an easy solution for many of them.

The ThinkStation is also connected to the monitor, and I used to have a USB switch to flip my keyboard, mouse and Yubikey from the laptop to the workstation and back. But as noted above, this switch somehow made the USB hub in the laptop dock unhappy (maybe because I was switching too quickly after resume or so), so it's currently removed from the setup and I use the workstation via SSH only.

It's mounted under the table using a ROLINE PC holder. You won't win any design awards with it, but it's easy to assemble and allows the computer to move with the table, minimizing the number of cables that need to have a flexible length.

monitor

The monitor is an older Dell UltraSharp U2515H – a 25″ 2560×1440 model. It sits on an Amazon Basics monitor arm (which is identical to an Ergotron LX, to the best of my knowledge) and is accompanied by a Dell AC511 soundbar.

I don't use the adjustable arm much. It's from the time I had no real standing desk and would use the arm and a cardboard box to lift the monitor and keyboard to standing level. If you don't want to invest in a standing desk, that's the best and cheapest solution!

The soundbar is good enough for listening to music while working and for chatting with colleagues.

webcam

A Logitech C920 Pro, what else?

Works perfectly under Linux with the UVC driver and has rather good microphones. Actually, so good that I never use a headset during video calls, and so far nobody has complained about bad audio.

keyboard

A ThinkPad Compact USB Keyboard with TrackPoint. The keyboard matches the one in my T480s, so my brain doesn't have to switch. It was awful when I still had the "old" model and had to switch between the two.

UK layout. Sue me. I like the big return key.

mouse

A Logitech MX Master 2.

I got the MX Revolution as a gift a long time ago, and at first I was like: WTF, why would anyone pay a hundred bucks for a mouse?! Well, after some time I knew: it's just that good. And when it was time to get a new one (the rubber coating gets all slippery after a while) the decision was rather easy.

I've been pondering whether to try the MX Ergo or the MX Vertical at some point, but not enough to go and buy one of them yet.

other

notepad

I'm terrible at remembering things, so I need to write them down. And I'm terrible at remembering to look at my notes, so they need to be in my view. So there is a regular A5 notepad on my desk that gets filled with check boxes and stuff, page after page.

coaster

It's a wooden table, you don't want to have liquids on it, right? Luckily a friend of mine once made coasters out of old Xeon CPUs and epoxy. He gave me one in exchange for a busted X41 ThinkPad. I still think I made the better deal 😉

yubikey

Keep your secrets safe! Mine is used as a GnuPG smart card for both encryption and SSH authentication, as U2F on various pages, and as 2FA for VPN.

headphones

I own a pair of Bose QuietComfort 25 with an aftermarket Bluetooth adapter, and Anker SoundBuds Slim+. Both are used rather seldom while working, as my office is usually quiet and nobody is disturbed when I listen to music without headphones.

what's missing?

light

I want to add more light to the setup, not only to have a better picture during video calls but also to have better light when doing something else at the table – like soldering. The plan is to add an IKEA Tertial with some Trådfri smart LED in it, but the Tertial is currently not available for delivery at IKEA and I'm not going to visit one in the current situation.

bigger monitor

Currently pondering getting a bigger (27+ inch) 4K monitor. I just can't really decide which one to get. There are so many, and they all differ in some way. But it seems no affordable one offers an integrated USB switch and a sufficient number of USB ports, so I'll probably get whatever gives me a good picture without any extra features at a reasonable price.

Changing the monitor will probably also mean rethinking the sound output, as I'm sure mounting the Dell soundbar to anything but the designated five-year-old monitor won't work too well.

Building a Shelly 2.5 USB to TTL adapter cable

When you want to flash your Shelly 2.5 with anything but the original firmware for the first time, you'll need to attach it to your computer. Later flashes can happen over the air (at least with ESPHome or Tasmota), but the first one cannot.

In theory, this is not a problem as the Shelly has a quite exposed and well documented interface:

Shelly 2.5 pinout

However, on closer inspection you'll notice that your normal jumper wires don't fit, as the Shelly has a connector with 1.27mm (0.05in) pitch and 1mm diameter holes.

Now, there are various tutorials on the Internet on how to build a compatible connector using Ethernet cables and hot glue or with female header socket legs, and you can even buy cables on Amazon for 18€! But 18€ sounded like a lot, and the female header socket thing, while working, was pretty finicky to use, so I decided to build something different.

We'll need 6 female-to-female jumper wires and a 1.27mm pitch male header. Jumper wires I had at home; the header I got is a SL 1X20G 1,27 from reichelt.de for 0.61€. It's a 20 pin one, so we can make 3 adapters out of it if needed. Oh, and we'll need some isolation tape.

SL 1X20G 1,27

The first step is to cut the header into 6 pin chunks. Make sure not to cut too close to the sixth pin, as the whole thing is rather fragile and you might lose it.

SL 1X20G 1,27 cut into pieces

It now fits very nicely into the Shelly with the longer side of the pins.

Shelly 2.5 with pin headers attached

The second step is to strip the plastic part off one side of the jumper wires. Those are designed to fit 2.54mm pitch headers and won't work for our use case otherwise.

jumper wire with removed plastic

As the connectors are still too big, even after removing the plastic, the next step is to take some pliers and gently press the connectors until they fit the smaller pins of our header.

Shelly 2.5 with pin headers and a jumper wire attached

Now is the time to put everything together. To avoid short circuiting the pins/connectors, apply some isolation tape while assembling, but not too much, as the space is really limited.

Shelly 2.5 with pin headers and a jumper wire attached and taped

And we're done: a beautiful (lol) and working (yay) Shelly 2.5 cable that can be attached to any USB-TTL adapter, like the pictured FTDI clone you can get almost everywhere.

Shelly 2.5 with full cable and FTDI attached

Yes, in an ideal world we would have soldered the header to the cable, but I didn't feel like soldering in that limited space. And yes, shrink-wrap might be a good thing too, but again, limited space, and with isolation tape you only need one layer between two pins, not two.
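Once the cable is attached (and the Shelly is held in flash mode), the first flash could look something like this – a hypothetical esptool invocation, with the serial port and firmware file name obviously depending on your setup:

# GPIO0 to GND during power-up puts the ESP8266 into flash mode
esptool.py --port /dev/ttyUSB0 write_flash 0x0 shelly25-firmware.bin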

Remote management for OpenWRT devices without opening inbound connections

Everyone is working from home these days and needs a decent Internet connection. That's especially true if you need to do video calls and the room you want to do them in has the worst WiFi coverage of the whole flat. Well, that's exactly what happened to my parents in law.

When they moved in, we knew that at some point we would have to fix the WiFi – the ISP-provided DSL/router/WiFi combo would not cut it, especially not with the shape of the flat and the elevator shaft in the middle of it: the flat is basically a big C around said shaft. But it was good enough for email, so we postponed that. Until now.

The flat has wired Ethernet, but the user's MacBook Air does not. That would have been too easy, right? So let's add another access point and hope the situation improves.

Luckily I still had a TP-Link Archer C7 AC1750 in a drawer, which I could quickly flash with a fresh OpenWRT release, disable DHCPd on and configure with the same SSID and keys as the main/old WiFi. But I didn't know which channels would be best in the destination environment.

Under normal circumstances, I'd just take the AP, drive to my parents in law and finish the configuration there. Nope, not gonna happen these days. So my plan was to finish the configuration here, put the AP in a box and leave it on the porch where someone could pick it up.

But this would leave me without a way to further configure the device once it had been deployed – I was not particularly interested in trying to get port forwarding configured over the phone, and I was pretty sure UPnP was disabled in the ISP router. Installing a Tor hidden service for SSH was one possibility, setting up a VPN and making the AP a client another. Well, or just creating a reverse tunnel with SSH!

sshtunnel

Creating a tunnel with OpenSSH is easy: ssh -R127.0.0.1:2222:127.0.0.1:22 server.example.com will forward localhost:2222 on server.example.com to port 22 of the machine the SSH connection originated from. But what happens if the connection dies? Adding a while true; do …; done around it might help, but I'd rather not reinvent the wheel here!
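For illustration, the hand-rolled version would look something like this – and promptly miss all the corner cases (connections that hang without dying, backoff, logging, …):

# naive keep-alive loop; don't do this, use sshtunnel instead
while true; do
    ssh -N -R 127.0.0.1:2222:127.0.0.1:22 server.example.com
    sleep 10
done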

Luckily, somebody already invented that particular wheel: OpenWRT comes with a sshtunnel package that takes care of setting up and keeping up such tunnels, plus documentation how to do so. Just install the sshtunnel package, edit /etc/config/sshtunnel to contain a server stanza with hostname, port and username and a tunnelR stanza referring to said server plus the local and remote sides of the tunnel, and you're good to go.

config server home
  option user     user
  option hostname server.example.com
  option port     22

config tunnelR local_ssh
  option server         home
  option remoteaddress  127.0.0.1
  option remoteport     2222
  option localaddress   127.0.0.1
  option localport      22

The only caveat is that sshtunnel needs the OpenSSH client binary (and the package correctly depends on it), and OpenWRT does not ship the ssh-keygen tool from OpenSSH but only the equivalent for Dropbear. As OpenSSH can't read Dropbear keys (and vice versa), you'll have to generate the key elsewhere and deploy it to the OpenWRT box and the target system.

Oh, and OpenWRT defaults to enabling password login via SSH, so please disable that if you expose the box to the Internet in any way!
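A sketch of both caveats, assuming you prepare the key on your workstation (file locations on the OpenWRT side may differ depending on your sshtunnel configuration):

# generate an OpenSSH key off-device and deploy it
ssh-keygen -t ed25519 -f sshtunnel_key -N ''
scp sshtunnel_key root@openwrt:/root/.ssh/id_ed25519
ssh-copy-id -i sshtunnel_key.pub user@server.example.com

# disable SSH password logins on the OpenWRT box
uci set dropbear.@dropbear[0].PasswordAuth='off'
uci set dropbear.@dropbear[0].RootPasswordAuth='off'
uci commit dropbear
/etc/init.d/dropbear restart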

Using the tunnel

After configuring and starting the service, you'll see the OpenWRT device logging in to the configured remote and opening the tunnel. For some reason that connection does not show up in the output of w – probably because no shell was started or something, but the logs show it clearly.

Now it's just a matter of connecting to the newly opened port and you're in. As the port is bound to 127.0.0.1, the connection is only possible from server.example.com itself or by using it as a jump host via OpenSSH's ProxyJump option: ssh -J server.example.com -p 2222 root@localhost.

Additionally, you can forward a local port over the tunneled connection to create a tunnel for the OpenWRT web interface: ssh -J server.example.com -p 2222 -L8080:localhost:80 root@localhost. Yes, that's a tunnel inside a tunnel, and all the network engineers will go brrr, but it works and you can access LuCI on http://localhost:8080 just fine.

If you don't want to type that every time, create an entry in your .ssh/config:

Host openwrt
  ProxyJump server.example.com
  HostName localhost
  Port 2222
  User root
  LocalForward 8080 localhost:80

And we're done. Enjoy easy access to the newly deployed box and carry on.

