mirror of https://gitlab.com/netravnen/NetworkLabNotes.git synced 2024-12-26 21:07:55 +00:00

Compare commits: 9 commits, 4 changed files with 136 additions and 42 deletions
@ -369,3 +369,34 @@ Problems by running \textit{Full Mesh} is the formula of \[ iBGPsessions = n*(n-
\item distance is set to 20 compared to 200 for \gls{ibgp} routes,
\item Next hop does \textit{not} change for \gls{ebgp} routes advertised to \gls{ibgp} neighbours \textit{by default}\footnote{It is often necessary to tell a router to set itself as the next hop before advertising to \gls{ibgp} neighbours}.
\end{enumerate}
\subsection[BGP Zombies]{Border Gateway Protocol Zombies}\label{sec:bgpzombies}
\gls{bgp} zombies\cite{ietf-idr-bgp-sendholdtimer-00} can occur for a multitude of reasons, depending on the implementation. Examples are:
\begin{enumerate}
\item Overloaded control plane
\item Inability to send out UPDATE/KEEPALIVE messages due to full output queues
\item A stuck TCP session the \gls{bgp} daemon is unaware of (e.g.\ the TCP window size changed to 0)
\end{enumerate}
The consequence of \gls{bgp} sessions not being able to close properly can be zombie routes: the router originating a route is, due to one or more stuck sessions, unable to send out WITHDRAW messages. Other routers therefore believe the route is still active and do not withdraw it from their own \gls{rib}, ending up with a \gls{rib} containing stale routes.
One workaround to get rid of zombie routes is to completely reset your router's \gls{rib}. This can be done, for example, by rebooting network edge routers\cite{Navigati54:online}.
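A less disruptive alternative, sketched here assuming FRRouting's \texttt{vtysh} shell (the exact command syntax varies between implementations and versions), is to reset only the suspected stuck sessions instead of rebooting the whole router:
\begin{txt}
# Soft-reset all BGP sessions from the FRRouting shell
sudo vtysh -c 'clear bgp * soft'
# Or hard-reset a single suspected stuck neighbour (192.0.2.1 is an example)
sudo vtysh -c 'clear bgp 192.0.2.1'
\end{txt}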
As of writing (November 2023), the following known public implementations have implemented the draft:
\begin{enumerate}
\item FRRouting\cite{bgpdimpl26:online}
\item neo-bgp\cite{Whatdoes40:online} (bgp.tools)
\item OpenBGPD\cite{Rebgpdse40:online}
\end{enumerate}
As of writing (November 2023), the following known public implementations are working on implementing the draft:
\begin{enumerate}
\item BIRD \url{https://gitlab.nic.cz/labs/bird/}\\
branch BGP_SendHoldTimer
\end{enumerate}
It is unknown when commercial vendors will implement the current internet draft. This will most likely not happen before the draft has been published as an RFC.

@ -7,33 +7,69 @@
\section{Kernel Upgrades}
LIST KERNELS ON /boot PARTITION
\begin{txt}
dpkg --list | grep linux-image
dpkg --list | grep linux-headers
\end{txt}
REMOVE SELECTED KERNEL VERSIONS FROM BOOT PARTITION
\begin{txt}
sudo apt-get purge linux-image-4.4.0-{75,78,79}
sudo apt-get purge linux-image-extra-4.4.0-{75,78,79}
sudo apt-get purge linux-headers-4.4.0-{75,78,79}
\end{txt}
or alternatively
\begin{txt}
sudo apt autoremove [-f]
\end{txt}
My one-liner to remove old kernels (this also frees up disk space): https://askubuntu.com/a/254585
\begin{txt}
dpkg --list | grep linux-image | awk '{ print $2 }' | sort -V | sed -n '/'`uname -r`'/q;p' | xargs sudo apt-get -y purge
\end{txt}
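The one-liner works because \texttt{sort -V} orders the package names by version, and the \texttt{sed} expression prints each name until it reaches the running kernel (\texttt{uname -r}), then quits, so only kernels older than the running one reach \texttt{apt-get purge}. A demonstration of that step on hypothetical package names, pretending the running kernel is 4.4.0-78:
\begin{txt}
printf '%s\n' linux-image-4.4.0-79 linux-image-4.4.0-75 linux-image-4.4.0-78 \
  | sort -V \
  | sed -n '/4\.4\.0-78/q;p'
# prints only linux-image-4.4.0-75
\end{txt}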
Remember to update grub2 configuration
\begin{txt}
sudo update-grub2
\end{txt}
\newpage
\subsection{Proxmox}
\subsubsection{Proxmox Migrations}
Move an LXC container's storage volumes to a different storage backend, both the boot disk and additional disks. Here 1400 is the example container ID and ``tank'' the target storage backend. We need to stop the container before we are allowed to migrate its storage volumes, and we start the container back up after the migration finishes.
\begin{txt}
sudo pct stop 1400 && \
sudo pct move-volume 1400 rootfs tank --delete && \
sudo pct move-volume 1400 mp0 tank --delete && \
sudo pct start 1400
\end{txt}
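Afterwards it can be checked that the volumes actually moved to the target storage backend (a sketch; 1400 and ``tank'' are the example values from above):
\begin{txt}
# The rootfs and mp0 entries should now reference the tank storage
sudo pct config 1400
\end{txt}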
Use remote-migrate to migrate an LXC container to a different Proxmox node in another Proxmox cluster. This is an offline migration: we shut the container down for the migration and restart it with the new bridge setting afterwards. If the IPs have changed, this needs to be updated manually.
\begin{txt}
sudo pct remote-migrate \
$(
sudo pct list |
grep <LOOK FOR A SPECIFIC HOSTNAME> |
grep --perl-regex --only-matching '^\d+'
) \
<TARGET CONTAINER/VM ID> \
'apitoken=PVEAPIToken=<USER>@<METHOD>!<TOKEN NAME>=<TOKEN KEY>,host=<TARGET HOSTNAME OR IP>' \
--delete 1 \
--online 0 \
--restart 1 \
--target-bridge <TARGET BRIDGE NAME> \
--target-storage <TARGET STORAGE NAME>
\end{txt}
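After the migration finishes, the container should be listed on the target node and can be started there (a sketch, using the target container ID chosen above):
\begin{txt}
sudo pct list
sudo pct start <TARGET CONTAINER/VM ID>
\end{txt}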

chapter/pihole.tex Normal file

@ -0,0 +1,26 @@
% !TeX TS-program =
% !TeX spellcheck = en_DK
% !TeX encoding = UTF-8
% !TeX root = ../main.tex
\chapter{PiHole}
\section{Whitelisting}
\subsection{Zoom Video Conferencing}
\begin{txt}
COMMENT='Zoom Video Conferencing';
pihole -w --comment "${COMMENT}" zoom.us --noreload && \
pihole -w --comment "${COMMENT}" app.zoom.us --noreload && \
pihole -w --comment "${COMMENT}" xmpp.zoom.us --noreload && \
pihole --white-regex --comment "${COMMENT}" '^zoom([\d\w]+)\.(cloud|\w{3})\.zoom\.us$' --noreload && \
pihole --white-regex --comment "${COMMENT}" '^\w{2}\d{1,4}\w{2}\d{1,4}\.zoom\.us$' --noreload && \
pihole --white-regex --comment "${COMMENT}" '^us\d{1,4}web\.zoom\.us$' --noreload && \
pihole --white-regex --comment "${COMMENT}" '^\w{1,4}\d{1,4}\.zoom\.us$' --noreload && \
pihole --white-regex --comment "${COMMENT}" '^\w{2}\d{1,4}\w{1,4}static\.zoom\.us$' --noreload && \
pihole --white-regex --comment "${COMMENT}" '\.cloud\.zoom\.us$' --noreload && \
pihole --white-regex --comment "${COMMENT}" '^\w+(\d{1,2})?\.\w{2}\.zoom\.us$' --noreload && \
pihole --white-regex --comment "${COMMENT}" '^\w{2}\d{1,3}images\.zoom\.us$'
\end{txt}
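Note that all but the last command pass \texttt{--noreload}, so the lists are only reloaded once, by the final command. Whether the entries match as intended can be checked afterwards with Pi-hole's query function (the domains below are examples):
\begin{txt}
pihole -q app.zoom.us
pihole -q us02web.zoom.us
\end{txt}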

@ -51,6 +51,7 @@
\include{chapter/voip}
\include{chapter/baseconf}
\include{chapter/linux}
\include{chapter/pihole}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% %