stanislavv
diary work Kubernetes Needed to synchronize secrets between the previous service (plain Docker with a home-grown wrapper for pseudo-clustering) and k8s. Fine: took external-secrets, set up synchronization through a webhook and... hit a snag: part of the secrets are plain text, while structured data is needed.
OK, so I wrote a contraption that turns that pile of strings into a structure in another secret.
And now the comrades show up and say: just make the synchronization work that way from the start, half of it is already there anyway.
Looks like next week I'll be redoing the damn thing after all. Good thing I hadn't migrated any services over yet...
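
For context, webhook-driven sync in external-secrets is wired up roughly like this. A minimal sketch, not the setup described above: the URL, jsonPath and key names are invented for illustration.

```yaml
# SecretStore that fetches values from a legacy service over HTTP
# (endpoint and response layout are assumptions)
apiVersion: external-secrets.io/v1beta1
kind: SecretStore
metadata:
  name: legacy-webhook
spec:
  provider:
    webhook:
      url: "http://legacy-secrets.internal/v1/{{ .remoteRef.key }}"
      result:
        jsonPath: "$.value"   # pluck one field out of the JSON response
---
# ExternalSecret that materializes the fetched value as a regular Secret
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: app-db
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: legacy-webhook
    kind: SecretStore
  target:
    name: app-db            # name of the resulting Secret
  data:
    - secretKey: password   # key inside the resulting Secret
      remoteRef:
        key: db-password    # substituted into the webhook URL above
```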
Ilya-S-Zharskiy
K8s Kubernetes deploy_a_cluster_without_internet github.com
Air-Gap installation

Offline environment

In case your servers don't have access to the internet directly (for example when deploying on premises with security constraints), you need to get the following artifacts in advance from another environment that has access to the internet.

Some static files (zips and binaries)
OS packages (rpm/deb files)
Container images used by Kubespray (the exact list depends on your setup)
[Optional] Python packages used by Kubespray (only required if your OS doesn't provide all python packages/versions listed in requirements.txt)
[Optional] Helm chart files (only required if helm_enabled=true)

Then you need to set up the following services in your offline environment:

an HTTP reverse proxy/cache/mirror to serve some static files (zips and binaries)
an internal Yum/Deb repository for OS packages
an internal container image registry that needs to be populated with all container images used by Kubespray (see the sketch after this list)
[Optional] an internal PyPi server for python packages used by Kubespray
[Optional] an internal Helm registry for Helm chart files
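
For the registry piece, the stock registry image is often enough to get started; a minimal sketch (port, hostname and image tag are illustrative, and a real deployment should add TLS and auth):

```sh
# Throwaway internal registry listening on port 5000
docker run -d --restart=always --name registry -p 5000:5000 registry:2

# Populate it by retagging and pushing each image Kubespray needs
docker tag k8s.gcr.io/pause:3.1 registry.internal:5000/pause:3.1
docker push registry.internal:5000/pause:3.1
```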

You can get the artifact lists with the generate_list.sh script. In addition, you can find some tools for offline deployment under contrib/offline.
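
Roughly, assuming a Kubespray checkout (output file names as in contrib/offline at the time of writing; verify against your version):

```sh
# Render the lists of files and images this configuration will need
cd contrib/offline
bash generate_list.sh

# Results land in ./temp/files.list and ./temp/images.list; feed the
# file list to any mirroring tool, e.g.:
wget -x -P /srv/files -i temp/files.list
```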
Configure Inventory

Once all artifacts are accessible from your internal network, adjust the following variables in your inventory to match your environment:

For the OS-specific settings, define only the ones matching your OS. You'll need to define the following variables in your inventory:

registry_host: Container image registry. If you don't use the same repository path for the container images as the ones defined in the download role's defaults, you need to override *_image_repo for those container images. To make your life easier, use the same repository path and you won't have to override anything else.
registry_addr: Container image registry, but containing only [domain or ip]:[port].
files_repo: HTTP webserver or reverse proxy that is able to serve the files listed above. The path is not important; you can store them anywhere as long as they are accessible by Kubespray. It's recommended to use *_version in the path so that you don't need to modify this setting every time Kubespray upgrades one of these components.
yum_repo/debian_repo/ubuntu_repo: OS package repository depending on your OS, should point to your internal repository. Adjust the path accordingly.
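
Put together, an offline inventory fragment might look like this; a sketch with placeholder hostnames, and the exact set of *_image_repo / *_download_url overrides depends on your Kubespray version:

```yaml
# inventory/mycluster/group_vars/all/offline.yml (hosts are examples)
registry_host: "registry.internal:5000"
files_repo: "http://files.internal/kubespray"
ubuntu_repo: "http://mirror.internal/ubuntu"    # or yum_repo / debian_repo

# Point the image repositories at the internal registry
kube_image_repo: "{{ registry_host }}"
gcr_image_repo: "{{ registry_host }}"
docker_image_repo: "{{ registry_host }}"
quay_image_repo: "{{ registry_host }}"

# Keeping *_version in the path means upgrades don't require edits here
kubeadm_download_url: "{{ files_repo }}/kubernetes/{{ kube_version }}/kubeadm"
kubectl_download_url: "{{ files_repo }}/kubernetes/{{ kube_version }}/kubectl"
kubelet_download_url: "{{ files_repo }}/kubernetes/{{ kube_version }}/kubelet"
```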
Ilya-S-Zharskiy
K8s leaky_sieve Kubernetes kube script_kiddies blog.aquasec.com

Most clusters were tied to small- to medium-sized organizations, but a notable subset was connected to large conglomerates and Fortune 500 companies, Aqua Security said. The exposures were a result of two misconfigurations: one that allows anonymous access with privileges and another that exposes Kubernetes clusters to the internet.

Over a three-month period, the researchers identified 350+ API servers which could be exploited by attackers, they wrote. Upon analyzing the newly discovered hosts, the team found that 72% had ports 443 and 6443 exposed (these are the default HTTPS ports). They also found that 19% of the hosts used HTTP ports such as 8001 and 8080, while the rest used less common ports (e.g., 9999).

The second issue is a misconfiguration of the `kubectl` proxy with flags that unknowingly expose the Kubernetes cluster to the internet, the researchers said. Impacted hosts included organizations across a variety of sectors such as financial services, aerospace, automotive, industrial, and security.

"When you run the same command with the following flags '–address=`0.0.0.0` –accept-hosts `.*`', the proxy on your workstation will now listen and forward authorized and authenticated requests to the API server from any host that has HTTP access to the workstation. Mind, that the privileges are the same privileges that the user who ran the 'kubectl proxy' command has."
Ilya-S-Zharskiy
code K8s container.training presentation slides learning Kubernetes
https://github.com/jpetazzo/container.training

using the dockercoins.yml demo application, it shows running a microservice in Kubernetes: monitoring, logging, updates, network configuration, and so on.

all the settings for a local run are included, incl. VMs and Kubernetes installation
(Vagrant is used)
https://github.com/jpetazzo/container.training/tree/main/prepare-local

There are recordings of previous sessions (i.e. the slides with narration), split into short chapters with meaningful titles
https://www.youtube.com/playlist?list=PLBAFXs0YjviJwCoxSUkUPhsSxDJzpZbJd

https://i.imgur.com/vcIzAFX.png


Some of the information is outdated (2018..2019), but you don't have to read everything in order (there are 2,320 slides in total at the moment)

you can download the archive, open kube-selfpaced.yml.html and browse it locally

author
Jérôme Petazzoni @jpetazzo

Ilya-S-Zharskiy
SRE Disaster_Recovery_Plan devops fear_and_loathing_in_the_clouds Kubernetes yaml meme-arsenal.com

We had 2 yaml-wranglers, 75 test applications, 5 production clusters, a half-dead ArgoCD and a whole multitude of pipelines of all years and environments, playbooks, and also TFS (Azure DevOps), SLAs, conf calls, integration-bus training and two dozen configs that wouldn't pass lint. Not that this was the necessary stash for a deploy, but once you start collecting resilient infrastructure, it becomes hard to stop. The only thing that worried me was the DRP. There is nothing in the world more helpless, irresponsible and depraved than a kernel panic. And I knew that sooner or later we would move onto that rotten stuff too.

Ilya-S-Zharskiy
docker K8s privacy TOR OnionService Kubernetes github.com

tor-controller allows you to create OnionService resources in kubernetes. These services are used similarly to standard kubernetes services, but they only serve traffic on the tor network (available on .onion addresses).

tor-controller creates the following resources for each OnionService:

a service, which is used to send traffic to application pods
tor pod, which contains a tor daemon to serve incoming traffic from the tor network, and a management process that watches the kubernetes API and generates tor config, signaling the tor daemon when it changes
rbac rules
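
A hedged sketch of what an OnionService object looks like; field names follow the tor-controller README of that era and should be treated as assumptions, with the selector and secret names invented for illustration:

```yaml
# Expose pods labelled app=example as an onion service on port 80
apiVersion: tor.k8s.io/v1alpha1
kind: OnionService
metadata:
  name: example-onion-service
spec:
  ports:
    - publicPort: 80      # port on the .onion address
      targetPort: 8080    # port the application pods listen on
  selector:
    app: example
  privateKeySecret:       # Secret holding the onion service private key
    name: example-onion-key
    key: private_key
```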



Ilya-S-Zharskiy
Bashsible hiload Kubernetes on weekdays I'm attending Slurm's evening school (Monday and Tuesday are theory, the remaining days are homework)

on weekends I plan to start learning Ansible (turns out I don't know it at all, even though I actually need it)

already picked up some wine

while I was there, a homeless guy bought a pack of smokes for a hundred rubles

and he was wearing a suit and a sheepskin coat

while I was in sweatpants, a pilling Uniqlo jumper and a 700-ruble jacket from Auchan
Ilya-S-Zharskiy
K8s Kubernetes *AWS *EKS *finally!
aws.amazon.com

Amazon EKS now supports Kubernetes version 1.15

Posted On: Mar 10, 2020

Amazon Elastic Kubernetes Service (EKS) now supports Kubernetes version 1.15 for all clusters.



A note on Kubernetes version 1.12 deprecation:

Amazon EKS support mirrors the Kubernetes community by providing full support for the 3 most recent releases. Kubernetes 1.12, 1.13, 1.14, and 1.15 are all fully supported today, and new clusters can be started using any of these releases. However, given the Kubernetes quarterly release cycle, it is critical for all customers to have an ongoing upgrade plan.

As of today, Kubernetes version 1.12 is deprecated in EKS, and will no longer be supported on May 11th, 2020. On this day, you will no longer be able to create new 1.12 clusters and all EKS clusters running Kubernetes version 1.12 will be updated to the latest available platform version of Kubernetes version 1.13.

We recommend customers upgrade existing 1.12 or 1.13 clusters and worker nodes to at least 1.14 as soon as practical. Learn more about the EKS version lifecycle policies in the documentation.
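
With the AWS CLI the control-plane upgrade is one call per minor version (the cluster name is illustrative; worker nodes are upgraded separately):

```sh
# Check the current control-plane version
aws eks describe-cluster --name my-cluster --query 'cluster.version'

# Upgrade one minor version at a time, e.g. 1.12 -> 1.13 -> 1.14,
# waiting for each update to finish before starting the next
aws eks update-cluster-version --name my-cluster --kubernetes-version 1.13
aws eks update-cluster-version --name my-cluster --kubernetes-version 1.14
```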

docs.aws.amazon.com
Ilya-S-Zharskiy
docker Linux coreos Kubernetes jenkinsX Linux today is just a platform for running dockers

infoworld.com

Docker (Moby) is a universal format for distributing software:
it replaces RPM/DEB

want to learn tomorrow's Linux? install CoreOS
youtube.com
docs.fedoraproject.org

and run a systemd helm chart

rolling out releases is just launching chaos monkeys
that randomly restart everything in your jx
docs.openshift.com

youtube.com
youtube.com
youtu.be
Ilya-S-Zharskiy
docker orchestration 5-less-than-K8s half-Kube k3s Kubernetes 10 days ago they shipped the "one-point-oh" of kuber-three-es
github.com

K3s has reached v1.0.0. This release denotes a focus on stability and quality moving forward, while still delivering new and useful features.

K3s is a fully conformant Kubernetes distribution that focuses on presenting a small footprint and ease of operation. K3s packages the entire Kubernetes stack into a single binary, including:

Kubernetes master components: kube-apiserver, kube-scheduler, kube-controller-manager, and cloud-controller-manager
Kubernetes node components: kubelet and kube-proxy
containerd as the container runtime
CoreDNS for cluster DNS
Flannel for cluster networking
CLI utilities including kubectl, crictl, and ctr
An embedded SQLite database that replaces etcd
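
The single-binary claim is easy to try; the quick start published by the k3s project:

```sh
# Install k3s as a service and bring up a single-node cluster
curl -sfL https://get.k3s.io | sh -

# The same binary ships the bundled CLIs
sudo k3s kubectl get nodes
sudo k3s crictl ps
```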
Ilya-S-Zharskiy
Tekton Jenkins_X Golang K8s Knative Gloo jenkins Kubernetes jx Jenkins-X minikube youtu.be

jenkins-x.io

A healthy man's Jenkins, finally with no Java,
written in good old Go
youtu.be

jenkins.io


Available since March already
jenkins.io
and folks don't even know!
jenkins.io
jenkins-x.io
Jenkins X requires a Kubernetes cluster to exist so that it can be installed via jx boot.

There are a number of approaches for creating Kubernetes clusters.

Our recommended approach is to use Terraform terraform.io
to set up all of your cloud infrastructure (kubernetes cluster, service accounts, storage buckets, logging etc) and to use a cloud provider to create and manage your kubernetes clusters.

Or you can use a kubernetes provider-specific approach:
Amazon
jenkins-x.io
How to create a kubernetes cluster on Amazon (AWS)
Azure

How to create a kubernetes cluster on Azure
Google
jenkins-x.io
How to create a kubernetes cluster on Google Cloud (GCP)
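
Once a cluster exists, the install itself is driven by jx boot; a minimal sketch based on the Jenkins X 2.x boot flow, with the environment repo name invented for illustration (check jenkins-x.io for the current boot configuration repository):

```sh
# Clone the boot configuration and run the interactive boot pipeline
# against the current kubectl context
git clone https://github.com/jenkins-x/jenkins-x-boot-config.git environment-mycluster-dev
cd environment-mycluster-dev
jx boot
```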


jenkins-x.io
youtube.com
youtube.com

youtu.be

youtu.be

youtu.be