10:00
Coherent optical transceivers - current capabilities and future possibilities
Thomas Weible, Gerhard Stein
Flexoptix
With 400G speeds, coherent technology was introduced to pluggable optical transceivers (OIF 400ZR and OpenZR+). This technology is complex and powerful for your network, and it even influences your network device's operating system.
10:30
What will DNS4EU bring to users and ISPs?
Robert Šefr
Whalebone
DNS4EU will protect 100 million users across Europe at the DNS level. We will present the benefits of the project for both users and ISPs.
10:50
Attacks on private networks in 2023 from the perspective of the Turris Sentinel network
Filip Hron
CZ.NIC
What can be learned from the data collected by probes in the Turris Sentinel network over the past calendar year, and how do the tools used to view it differ?
11:10
Post-quantum transition: problems for popular protocols
Dmitry Belyavskiy
Red Hat
The presentation briefly explains several possible consequences of using PQ algorithms in network protocols.
11:30
Lunch
13:00
Welcome
13:10
Stairway to Anycast
Jan Žorž
6connect
Deployment experience building a global anycast network.
Legal boundaries of data collection for the purposes of criminal proceedings
Jan Kolouch
CESNET
In order to detect and investigate cybercrime, identify perpetrators and, where appropriate, seize the proceeds of such crime, it is essential that law enforcement agencies are able to obtain data from systems operated by Internet Service Providers. The Criminal Procedure Code contains specific mechanisms for obtaining such data.
Workshop - 60 min. (13:10 - 14:10)
13:30
News on DNS anycast for the national .CZ domain
Tomáš Hála
CZ.NIC
Where did DNS anycast for the .CZ domain spread to in 2023? How are things going with large DNS stack upgrades? Are KnotDNS and XDP worth using? How is the promotion of individual “letters” optimized? What other locations are we planning? What else is deployed using anycast besides the .CZ TLD and can you use it to run your domain?
13:50
Network simulation for testing
Alexander Zubkov
Qrator Labs CZ
We run an anycast network and use mostly BGP for internal routing. I will explain how we do simulations of our network to test new configurations or new software versions. And why we chose containerlab to do this.
14:10
NIX.CZ monitoring and statistics
Marian Rychtecký
NIX.CZ
How NIX.CZ monitors the operation of its infrastructure and how it handles and displays the statistics obtained from running it.
Sharing information on cyber threats
Jakub Onderka
NÚKIB
Sharing information about current cyber threats is an important aspect of prevention to avert or reduce the impact of cyber attacks within specific sectors.
14:30
Coffeebreak
15:00
iPerf3 or Measuring just the maximum throughput is nonsense, let’s do things better!
Zbyněk Kocur, Ondřej Vondrouš, Ondřej Votava
FEL ČVUT
In the lecture we will present advanced methods of network transmission capability diagnostics with respect to enhanced reporting and test setup options. After all, drive test measurements are quite different from fixed link measurements.
Rabbit holes of regulation
Jaromír Novák
NIX.CZ
The presentation will safely guide the audience through the regulatory rabbit hole and introduce them to the main regulatory innovations.
15:20
Meaningful measurement of DNS server capacity
Petr Špaček
ISC
An introduction to measuring the performance of DNS authoritative servers and resolvers. Open-source tools and their use for measuring DDoS scenarios and normal traffic.
Current challenges of regulation
Marek Ebert
ČTÚ
What are the current challenges the CTU faces in relation to changes in legislation and developments in the electronic communications market? What tools can the regulator use to facilitate the development of high-speed internet networks?
15:40
Performance tests of the 400G DNS stack
Kryštof Šádek
CZ.NIC
Real-world performance testing of DNS servers.
The new Cybersecurity Act and the NCIS Portal
Tomáš Pekař
NÚKIB
Presentation on the new Act on Cybersecurity and the forthcoming information system, the NCIS Portal.
16:00
New features in Knot Resolver 6.x
Oto Šťáva
CZ.NIC
Major changes and new features in the upcoming version of Knot Resolver. The most significant ones are the switch to declarative configuration and the automated management of resolver processes. Another improvement is the completely revamped way of handling the rules that specify which answer should be served to which client.
Migration to the common gov.cz domain
Michal Daněk
Úřad vlády ČR
16:20
Securing networks with Suricata (7)
Lukáš Šišmiš
CESNET
Presentation of Suricata - a high-performance and open-source network monitoring and threat detection engine.
16:40
End of the first day of the conference
18:00
Social event
Day 2 tracks: Network Management, Academic Projects
09:30
Registration
10:00
Moving a data center during full operation
Tomáš Procházka
Seznam.cz
Moving 41 tons of hardware from DC Nagano in Prague to the new Seznam DC in Benátky nad Jizerou. How did we handle it from a network perspective, and what did Ansible help us "move out"?
vLab – virtualization of communication infrastructure and more
Jaroslav Burčík
FEL ČVUT
The talk will cover the experience of building and operating a virtual laboratory at the Faculty of Electrical Engineering of CTU. vLab is a remotely accessible laboratory that enables users to design and emulate complex runtime environments working with real images of network elements as well as end stations and servers.
10:20
BIRD, MPLS and EVPN
Ondřej Zajíček
CZ.NIC
The potential of the BIRD routing daemon in MPLS and EVPN deployments.
Deploying and using a private academic cloud
Martin Kontšek
Žilinská univerzita
The lecture will introduce the deployment and use of a private cloud computing system operated at the Department of Information Networks, FRI UNIZA. We will also introduce the OpenStack Charms platform, which enables automation of CC OpenStack deployment and operation.
10:40
Modern network configuration with systemd-networkd
Ľubor Jurena
skHosting.eu
In Linux operating systems, systemd is responsible for managing networks using the systemd-networkd component. This talk will focus on introducing the new administration capabilities that systemd-networkd brings to the Linux operating system.
11:00
Linux: hardware switch support
Pavel Šimerda
Linux has long been used as an open-source operating system for routers and servers. Thanks to changes in recent years, it looks like hardware switching can now be handled on Linux as well. So what can you do today with Linux on machines with integrated switches?
11:20
Coffeebreak
11:40
Challenges of self-hosting services for network engineers
Tomáš Hlaváček
Overview of network debugging tools - current state (with usage examples in moderately entertaining stories) and a few attached notes about the latest contributions and desirable improvements.
12:00
Network tuning alternatives
Václav Nesvadba
Faster CZ
Reducing IP address consumption, easy automation, cheaper hardware, increased availability.
12:20
A look behind the scenes of the network for the RIPE meeting
Ondřej Caletka
RIPE NCC
How to run a network for several hundred participants on mostly open source software.
12:40
OpenBMP - what the heck was going on in my network (lightning talk)
Lubomír Prda
CESNET
This lightning talk shows one of the tools for storing and analyzing BMP data. If you have ever heard the phrase "it was working fine yesterday", this tool may be just what you are missing. Let me show you what it can do.
12:50
Moving UA TLD from BIND to Knot (lightning talk)
Dmytro Kohmanyuk
Hostmaster.UA
13:00
Closing
13:10
Lunch
The CSNOG 2024 Meeting Report
The sixth meeting of the community of Czech and Slovak network administrators, CSNOG, took place on 23 and 24 February 2024. The CSNOG event is organized by CZ.NIC, NIX.CZ and CESNET. The program of this event is managed by the program committee.
Presentations and videos from this year's CSNOG are available on the event website under the program section.
CSNOG 2024 in numbers:
167 participants, mainly from the Czech Republic and Slovakia
30 talks (divided into three tracks)
7 partners:
GOLD: Unimus
SILVER: Alef Nula, RIPE NCC, Seznam
COFFEE: Flexoptix
This summary was written by Petr Krčmář, who is a member of the Program Committee.
Meeting report
Thomas Weible, Gerhard Stein: Present and future of coherent optical transceivers
As you increase the frequency of light and with it the bandwidth, it becomes increasingly difficult to detect the signal. For example, chromatic dispersion occurs and fibre runs have to be shorter and shorter. But light has other properties, so we have more detection options: in addition to amplitude, there is also phase and polarisation. Coherent transceivers can use different polarisations of light and transmit the signal in a spatial arrangement. In effect, amplitude and phase are combined and the signal is decomposed into, for example, sixteen constellation points in 16QAM modulation, where the in-phase component forms the real axis and the quadrature component the imaginary axis.
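To make the amplitude-and-phase picture concrete, here is a toy sketch (my illustration, not from the talk) that maps 4-bit groups onto the 16 points of a 16QAM constellation:

```python
# Toy illustration: mapping 4-bit groups onto a 16QAM constellation.
# Each axis carries 2 bits, giving 16 amplitude/phase combinations,
# so every transmitted symbol encodes 4 bits at once.
LEVELS = [-3, -1, 1, 3]  # normalized signal levels per axis

def qam16_symbol(bits: int) -> complex:
    """Map a 4-bit value (0-15) to one of 16 constellation points."""
    i = LEVELS[(bits >> 2) & 0b11]  # in-phase component (real axis)
    q = LEVELS[bits & 0b11]         # quadrature component (imaginary axis)
    return complex(i, q)

# 16 distinct points; amplitude is abs(symbol), phase is its argument.
print(sorted({qam16_symbol(b) for b in range(16)}, key=abs))
```

Because every symbol carries 4 bits, 16QAM quadruples the bit rate of a given symbol rate compared to simple on-off keying.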
A concrete solution was presented from the DE-CIX peering centre, where Flexoptix transceivers deployed in a Nokia switch were used. The advantage of coherent transceivers is a tunable laser frequency. They also have a standard diagnostic interface that makes it possible to read out input voltage, signal quality or temperature. Temperature is critical: you should not overheat the transceivers and you must monitor them. Each transceiver needs up to 20 watts, and when you have 32 of them in a device, the modules alone create a large heat source.
In practical deployments it is necessary to keep an eye on compatibility, i.e. which transceivers are supported by the operating system of a given device. You really have to watch the specifications to make sure that everything works together. Today, 400G transceivers are commonly used; the world is gradually moving towards 800G and later to 1.6T.
Robert Šefr: What will DNS4EU bring to users and ISPs?
DNS4EU focuses on the availability of DNS query resolution, but also on blocking responses that pose a threat to users. We are not able to stop all attacks with this, but we can stop about 94% of them at the DNS level. In the case of a phishing campaign, it is possible to block access to a malicious site stealing user data at the resolver.
DNS4EU was initiated by the European Commission and aims to offer resolvers in Europe while offering security. This is in line with the European strategy of self-sufficiency, so that we can use local resolvers in case of problems. At the same time, Europe is dealing with different security problems than America or Asia, so it is good to target protection to our European problems.
A consortium of organisations such as CZ.NIC, CVUT and others is involved in the project. One of the outputs of the project will be a DNS resolver available to the public, but another part is the availability of the resolver for telecom operators and government organizations. It will always be up to users what kind of resolution they use: a plain one, or one that also blocks certain services. Nobody is forcing anything on anybody. An important part of the project is anonymisation; no user data is collected.
The ability to resolve DNS and block problematic traffic is of interest to the state because it can offer its own organizations access to protective resources. It is then very easy to involve even very small organisations, such as a small local authority that does not have the resources for comprehensive security protection. Different information can be gathered in the DNS resolver and significantly reduce the risk of, for example, phishing attacks on different networks.
The central component of DNS4EU is Knot Resolver, which is developed at CZ.NIC. It does not yet support protection against DDoS attacks, but there are plans to add it. The resolver can then protect the rest of the Internet against various amplification attacks, but also restrict traffic to authoritative servers. It should also protect itself, so that it keeps working under very heavy load and can still serve the most important queries.
We already have an infrastructure in place that handles resolver synchronization, monitoring, reporting, problem alerting, and threat data distribution. We are adapting the backend to Knot Resolver 6.x and want to improve performance to handle large amounts of traffic.
There are also legal and ethical issues related to traffic, for example in the case of data handling. The public part will work with fully anonymized data: everything is anonymized at the resolver level before anything is stored. If the resolver is used by a telecom operator, that operator is the data controller and decides how to handle the data. In the case of government use the situation is reversed: data is needed to be able to detect, for example, phishing attacks, so all the data is collected there. But this scheme is not intended for end users.
Access to the source data can be granted by consensus of the whole consortium and after strict conditions have been met. We will only support something like this if there is a security or infrastructure benefit for the countries of the European Union. Information on individual attacks is also shared within the Union so that new attacks can be responded to as quickly as possible. The main platform for data exchange is the MISP platform to which most security teams are accustomed.
Filip Hron: Attacks on private networks in 2023 from the perspective of the Turris Sentinel network
Turris Sentinel is a set of components that collect attack data. One part collects the data and the client part applies protection. The protection is then done in real time using a dynamic firewall. The source of the data is so-called minipots, which attackers try to access using FTP, SMTP, telnet and HTTP protocols. The data is categorized and displayed on the Sentinel View website, where a preview of the dynamic firewall is available, but also a password check against the passwords used by the attackers.
Some of the most common passwords used by attackers include number sequences starting with one, modified variants of the word Password, and keyboard sequences like QWERTY. Attackers also vary in their password-testing methodology; for example, some try one password a day and another the next day. The largest number of unique attackers comes from China, followed by India, the United States and Brazil. We don't want to point fingers; these are the countries where the attacking IP addresses come from.
Dmitry Belyavskiy: Post-quantum transition: problems for popular protocols
The consensus among experts is that quantum computers will break traditional cryptography. This means that previously recorded encrypted communication can be decrypted, so the whole world is working on post-quantum algorithms to solve it. For example, algorithms are being designed by NIST in the US, working groups within the IETF are working on standardising protocols, and there are dedicated groups within OASIS.
It is expected that classical cryptography will be broken, but the new schemes are not yet proven and nobody is really too sure about anything. Most commonly used today are so-called hybrid solutions that combine traditional approaches with new adaptations that are supposed to be robust.
Of course, new algorithms bring a number of expected compatibility problems. For example, various middleboxes do not recognize new algorithms and therefore block them as unknown. They also increase the size of encryption keys and have lower performance, so you have to be prepared for the necessary changes.
Another problem is the aggravation of amplification attacks: larger keys produce larger responses, which may also be fragmented and blocked in some networks. It will also be necessary to revisit congestion control on TCP links: the initial congestion window has historically grown to around 10 segments, and because of the extra round trips it would be interesting to investigate higher values. CDNs already offer a larger initial window, as does QUIC, which has its own implementation that will need to be re-examined.
Other issues will arise in individual protocols, for example in the case of DNSSEC we can't fit longer signatures into a single packet. It is proposed to split the data at the application level, but we need to do more research around this. Current Linux distributions already offer some post-quantum algorithms, Fedora 39 was specifically mentioned.
Jan Žorž: Stairway to Anycast
Unicast means one-to-one, anycast means one-to-nearest. How to build your own anycast network? The plan was to build a prototype, measure, make adjustments, build a production version and you're done. We didn't build anycast because we needed it. We just wanted it and we were curious.
Different resolvers were used: BIND, KnotDNS, NSD and PowerDNS. BIRD was deployed for BGP routing and the dnsdist load balancer for load balancing. To translate the configuration for the different servers we wrote our own Python script. On each node there is a simple bash script that queries the local resolver and stops BIRD if there is a problem: when resolution is not working, the node should not be visible in BGP. A single ASN is used, from which three IPv4 /24 prefixes and three IPv6 /48 prefixes are announced.
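A minimal sketch of what such a health check could look like (the tested name and service name are placeholders, not the script from the talk):

```bash
#!/bin/bash
# Illustrative health check: query the local resolver and stop BIRD if it
# fails, so the node withdraws its anycast prefixes and traffic shifts to
# the remaining nodes.
if dig +time=2 +tries=1 +short @127.0.0.1 example.com A | grep -q .; then
    systemctl start bird    # resolver answers: (re)join the anycast cloud
else
    systemctl stop bird     # resolver broken: disappear from BGP
fi
```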
In the beginning, the whole solution did not work as expected, but we didn't know why, when and where it was failing. We used the RIPE Atlas network, which allows thousands of probes to reach targets and measure the results. Every car needs a speedometer, so we monitor and track every node and its performance in detail.
But anycast brings new problems; for example, getting a certificate from Let's Encrypt. There is no way to ensure that the validation request from the certificate authority ends up at the node that initiated it, so we have to proxy those requests from all nodes to the one running Certbot.
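One possible way to arrange that, sketched here with nginx (an assumption on my part; the talk did not specify the proxy mechanism):

```nginx
# Illustrative fragment on every anycast node (host name is a placeholder):
# ACME HTTP-01 validation requests are forwarded to the single node running
# Certbot, so it does not matter which node the CA's request lands on.
location /.well-known/acme-challenge/ {
    proxy_pass http://certbot-node.example.net;
}
```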
Using anycast, it is possible to run other services, not just DNS. For example, the possibility of running a distributed SMTP server, a replicated database or a replicated email repository was mentioned. We are still experimenting with this in the lab when we have time.
Tomáš Hála: News on DNS anycast for the national .CZ domain
The DNS anycast of the CZ.NIC association is now in 20 locations in 13 countries. We cover all continents except Antarctica. It primarily serves national domain traffic, but hosting is also available for other TLDs or, more recently, second-level domains in .CZ.
Anycast was built with availability and robustness in mind to withstand various network issues. It needs to be resilient to attacks and various errors that may occur. The entire service currently runs on 75 servers.
In the previous year, the hardware was reinforced in Frankfurt, which is a very important location. In Milan, on the other hand, we scaled down. A second large DNS stack was also built: instead of 30 servers with 10GE connectivity, only 10 servers with 25GE connectivity were used. We started using XDP, which allows us to make better use of the hardware and achieve higher throughput with fewer servers.
XDP allows us to serve many times more queries, but it does create problems when trying to achieve software diversity. The problem is that no other server besides our KnotDNS supports this mode.
Administrators continually run performance tests of the entire stack to make sure everything is working as expected. We also keep track of how much traffic we receive from which countries and what latencies we see. The goal is to get response times below 75 ms, especially in the most exposed locations. We get three times more queries from America than from the Czech Republic, which is understandable because the largest resolvers, run by Google, Cloudflare and Microsoft, are located there.
The plan is to add a new site in the United States, add another European site, and a new large DNS stack in a non-public site. We would also soon like to connect to NIX.CZ using 400GE, but that also means beefing up the large stack to be able to handle that traffic.
Alexander Zubkov: Network simulation for testing
Qrator Labs operates a service to protect against DDoS attacks. The entire network is built on Linux, including the nodes themselves and even the network elements. Everything is automated and formally described using automation tools, which allows comprehensive testing of new features before deployment.
It is of course possible to test on a single device, but it is not possible to verify how nodes communicate with each other or how they propagate prefixes, for example. We could create a real network, but that doesn't scale very well and is a lot of work. It's also possible to create a virtual simulation of the whole network where you can create a number of nodes and watch what happens there. But what do we use to do this simulation and how do we set it up?
In the end, the solution chosen was Containerlab, which uses Docker and has ready-made images with different operating systems. The configuration is in YAML and makes it easy to run and manage a large virtual infrastructure. The tool starts a set of containers and creates the required network interfaces between them. The same templates that are deployed to production can then be used to generate the configuration of the virtualized nodes.
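A minimal containerlab topology sketch (the image name is a placeholder): two Linux router containers joined by one link.

```yaml
# Illustrative containerlab topology: containerlab starts both containers
# and wires the veth pair between the named interfaces.
name: bgp-lab
topology:
  nodes:
    r1:
      kind: linux
      image: registry.example.net/router:latest
    r2:
      kind: linux
      image: registry.example.net/router:latest
  links:
    - endpoints: ["r1:eth1", "r2:eth1"]
```

The lab is then brought up with containerlab deploy -t <file> and removed again with containerlab destroy.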
The IaC (Infrastructure as Code) approach is a good thing and allows a lot of things to be simplified and tested in an automated way. It does require some programming skills and it's a lot of work, but it's useful. It's also useful to add tests continuously as new features are added; then, when you want to test, you won't have to write all the tests at once.
Marian Rychtecký: NIX.CZ monitoring and statistics
The technicians at NIX.CZ gradually came to the conclusion that they do not want to configure network elements using the command line. We want to be able to work with the command line over SSH, but we don't want to use it for automated configuration. So it was decided to use a REST API called the DME API, which is very fast and responds to commands in milliseconds.
However, it was necessary to create a fairly complex translation layer that converts traditional commands to JSON format. Nexus has its own converter for this, but unfortunately not for all commands. Documentation exists, but it is not in an ideal state. We had to figure out a lot of things ourselves, but I dare say we have 98% of it mastered.
A Python library was developed to communicate with the network elements; it talks to the DME API and retrieves data from Netbox. Everything is then combined and the information is stored in InfluxDB, from where we read it and use it to display the data in our systems. This all happens every 30 seconds. It would be possible to read the information every second as well, but so far we don't see any benefit in that.
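An illustrative sketch of such a polling loop; the endpoint, JSON shape and credentials are placeholders, not NIX.CZ internals:

```python
# Illustrative: read interface counters over a switch REST API and store
# them in InfluxDB every 30 seconds (the cadence mentioned in the talk).
import time
import requests
from influxdb_client import InfluxDBClient, Point
from influxdb_client.client.write_api import SYNCHRONOUS

client = InfluxDBClient(url="http://influx:8086", token="TOKEN", org="nix")
write = client.write_api(write_options=SYNCHRONOUS)

while True:
    data = requests.get("https://switch1/api/mo/sys/intf.json", timeout=5).json()
    for intf in data["interfaces"]:                 # hypothetical JSON shape
        write.write(bucket="metrics", record=(
            Point("ifcounters")
            .tag("device", "switch1")
            .tag("interface", intf["name"])
            .field("rx_bytes", int(intf["rxBytes"]))))
    time.sleep(30)
```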
In total, information from 37 devices and 2,339 network interfaces is read this way, generating 63,080 metrics every 30 seconds. Reading all the information takes 700 milliseconds and storing it takes another 250 milliseconds. So far these are just counters, but the plan is to augment the data with more information.
The database already grows by 4 GB every month, but it doesn't make sense to keep the data like this forever. We take advantage of the features of modern databases that allow data aggregation. This creates daily, weekly, monthly and yearly statistics. This reduces the size of each set to tens of megabytes.
The advantage of such detailed collection is more detailed flow data on individual interfaces. Previously we only tracked averages, but as a network designer I am interested in how much data actually flows when designing a network: all the data that needs to be transferred, not just the average. In the 30-second data we can see that we are transmitting significantly more than the port averages suggest.
Zbyněk Kocur: iPerf3 or Measuring just the maximum throughput is nonsense
iPerf is available in two versions that are being developed in parallel: iPerf2 and iPerf3. The coverage of operating systems is very broad but not uniform. Not everything works on every system, and not everything works as the user would like it to. Measurements run in client-server mode, where the client sends traffic to the server and displays the measured data.
The main difference between the second and third versions is the output format: while iPerf2 outputs data in CSV, iPerf3 uses JSON. The older version also requires the same parameters to be set on both sides, while version three transfers the measurement settings over the network. With iPerf2 it was only possible to get data from the client side; iPerf3 also allows you to download the log from the server and get more data.
iPerf3 is designed as a measurement tool capable of utilizing the latest transmission technologies. If you set it up properly, you can use it from units of megabits to hundreds of gigabits. However, you must tune not only the tool itself, but also the operating system underneath it.
The achievable transfer rate depends on the line throughput, the delay and the error rate. For links with large delays, for example 600 ms for geostationary satellites, the TCP window must be set to megabytes or tens of megabytes to achieve reasonably large flows of tens of megabits. Add an error rate of only 0.25% and, on a link with that much delay, the bitrate drops to a fraction of the speed.
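A quick sanity check of that claim: the window needed to keep a link busy is the bandwidth-delay product (rate times round-trip time).

```python
# Bandwidth-delay product: the TCP window (bytes) needed to sustain a given
# rate over a given round-trip time.
def tcp_window_bytes(rate_mbit: float, rtt_ms: float) -> float:
    """Window in bytes needed to keep the pipe full at rate_mbit and rtt_ms."""
    return rate_mbit * 1_000_000 / 8 * (rtt_ms / 1000)

# A geostationary link has an RTT of roughly 600 ms:
for rate in (10, 50, 300):
    print(f"{rate} Mbit/s -> {tcp_window_bytes(rate, 600) / 1e6:.2f} MB window")
# 10 Mbit/s already needs ~0.75 MB; hundreds of Mbit/s need tens of MB.
```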
Many measurement tools today have built-in tests according to RFC 6349, which describes how TCP measurements should be performed. However, it contains no specific guidance. Furthermore, the document is from 2011 and many of its recommendations are no longer valid today because of new congestion algorithms. For example, it suggests that measuring TCP makes sense up to a 5% loss rate, but you can see that even a small loss rate affects traffic. The world of protocols is simply evolving so fast that nobody keeps the methodologies up to date.
Petr Špaček: Meaningful measurement of DNS server capacity
The problem with naive measurement is that we are measuring in an unknown environment, against a server we set up that returns some answers. But what if the server responded to everything with an error and didn't give meaningful answers? It has happened to us before.
When we measure DNS, we need to distinguish between the authoritative server and the resolver. They are completely different pieces of software that just speak the same language. Imagine one is a cow and the other is a horse. They are two completely different species that have very little in common. They both eat grass, so the input is the same. But that's where the similarity ends.
For example, if we measure a resolver, we have to take into account that its state changes over time. A cache has a finite lifetime, so as time passes, its contents change. That's a nightmare. So when measuring, we also have to take into account the timing of sending each query.
The data we use for the test is also a problem: queries differ in cost and processing time, for example. It is therefore necessary to have a sample of real traffic, not just the same queries sent over and over again. In addition, for resolvers we need real data including real timings. With web traffic, for example, we must take into account that browsers have their own cache.
DDoS attacks are a separate chapter: they usually pick the most expensive queries, some problem may surface inside the server, and performance drops. Another issue is server administration during operation. Include such actions too, because in the lab everything may look good, but once the administrator adds a zone in production, the server may slow down.
There are a number of tools for measuring this: dnsperf, kxdpgun, resperf, shotgun, and others. Beware, however, that not all of them are suitable for all tests. Sometimes the documentation also says something that doesn't really work.
The advantage of the dnsperf tool is that it is easy to use, but it is not very powerful. It is therefore not suitable for measuring attacks, but it gives intermediate results and is good at measuring latency. kxdpgun, on the other hand, is extremely powerful and good for measuring attacks. It has worked well for me to combine the two, simulating normal traffic with dnsperf while running an attack with kxdpgun.
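Sketched as commands, with addresses, input files and rates as placeholders (the flags are the documented ones as far as I recall them):

```bash
# Illustrative combination from the talk: dnsperf keeps a steady "normal"
# load and reports latency, while kxdpgun replays an attack pattern at a
# much higher rate against the same server.
dnsperf -s 192.0.2.53 -d normal-queries.txt -l 300 -Q 50000 &
kxdpgun -i attack-queries.txt -Q 5000000 -t 60 192.0.2.53
```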
The important thing is to verify your own measurements: for example, throw out the real server and replace it with a responder like dumdumd, which is a simple packet repeater. It is very simple and adds no processing delay. This makes it possible to discover problems such as a saturated CPU or an unsuitable measurement tool.
Performance tests don't make sense without testing the test environment itself. You'll probably come out with nonsense without it.
Kryštof Šádek: Performance tests of the 400G DNS stack
A DNS stack is a group of independent servers that share the same connectivity. They do not communicate with each other, but are connected to a single network environment. They respond identically, and load distribution between them is provided by BGP multipath.
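An illustrative BIRD fragment for the router in front of such a stack (not the actual CZ.NIC configuration): equal-cost BGP paths learned from the individual servers are merged into one ECMP kernel route, spreading queries across all of them.

```
# Illustrative kernel protocol fragment for BIRD 2:
protocol kernel {
    merge paths on;      # combine equal routes into one ECMP multipath route
    ipv4 {
        export all;
    };
}
```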
CZ.NIC has built a new DNS stack with 400GE connectivity, but in practice it receives normal traffic of about 14 Mbit/s. We therefore had to generate test traffic to verify the theoretical capacity calculation.
The internal capacity of the network and the production servers did not allow us to generate sufficient traffic. However, we had hardware ready for building another stack, so we decided to use it. In the end not all servers were tested; only three were connected, while ten servers generated the traffic.
The testing simulated real traffic, including queries to the .CZ zone, an NXDOMAIN to NOERROR ratio of about 8% and an IPv4 to IPv6 ratio of 66%. The kxdpgun tool was used for testing.
Without DNSSEC, the stack can handle 240 million queries per second. On the other hand, if we query only the records for DNSSEC, the performance drops to 127 million queries. With a realistic 20% query rate for DNSSEC, we get to about 210 million queries every second. We haven't tested a lot of this and there is certainly a lot of room for further testing.
Oto Šťáva: New features in Knot Resolver 6.x
Knot Resolver is an open-source DNS resolver, it is modular, has a fast thin core and allows you to add modules written in C and Lua. The resolver is single-threaded and uses operating system services to scale to multiple cores. However, this approach makes it difficult to aggregate statistics and metrics. Process management is based on systemd, but it is not available in all environments.
Modularity allows advanced functionality to be decoupled so that it is not an unnecessary burden in normal operation. However, modules must be explicitly loaded in Lua before they can be used.
The configuration is written in Lua and, while very powerful, it is difficult to grasp for most users' needs. Also, some errors may only appear after the resolver has been running for some time.
Version 6 attempts to address these issues and is now available for public testing. A new manager written in Python has been created to manage the processes, which also collects statistics and metrics. There is also a new declarative configuration written in YAML that has more rigid rules and is comprehensively validated. It is possible to know in advance if the configuration is correct or not. The new configuration format is also easier to grasp for most users. Lua, however, is not going anywhere, but is used for internal purposes.
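An illustrative sketch of what the declarative configuration might look like; the key names follow the 6.x documentation as far as I recall it, so treat them as approximate and check the docs:

```yaml
# Illustrative /etc/knot-resolver/config.yaml sketch (approximate keys):
workers: 4            # the manager starts and supervises four resolver processes
network:
  listen:
    - interface: 127.0.0.1
      port: 53
cache:
  size-max: 500M
```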
The new manager unifies process management across all environments and can automatically reconfigure processes. Knot Resolver has nothing like a reload, but the manager can replace instances one at a time without an outage. The manager also exposes an HTTP API for changing the configuration or reading statistics and metrics.
The developers are now collecting feedback from testing and would like to release a stable version in the first quarter of this year. The new Knot Resolver should also be deployed on the public ODVR resolvers operated by CZ.NIC.
Lukáš Šišmiš: Securing networks with Suricata (7)
Suricata is a very powerful open-source network monitoring tool. Rules can be embedded in it, and alerts are then generated. However, even without these rules, information about individual traffic events is generated.
It is possible to set up passive monitoring, where traffic is mirrored on the switch, creating an IDS. But active deployment is also possible, creating an IPS that blocks unwanted traffic. Standard YAML and JSON formats also allow easy integration with SIEM systems like Elastic or Splunk.
Suricata also allows you to extract data from different logs. For example, if someone downloads a file, it can extract it, create a hash of it, and compare it to a database of malicious files.
Over the past two years, Suricata 7.0 has been developed, advancing the transition from C to Rust. This is a good direction that we want to continue in. It added XDP support, conditional PCAP logging, HTTP/HTTP2 header inspection, a BitTorrent parser, and increased performance.
Tomáš Procházka: Moving a data center during full operation
The Nagano data centre was ceasing operations in 2023, and Seznam is growing steadily in terms of data and computing. "It was clear that we had to leave Nagano and move somewhere with 41 tons of hardware." In the end, building its own datacenter won out, allowing the company to reduce operating costs, shed long-term lease commitments and customize its own space. "We also wanted our own monitoring, which would allow us to collect more data."
Since Seznam.cz already operated the Kokura and Osaka datacentres, the new datacentre was named Nagoya. All that was left was a small matter: finding a site that was large enough and offered a good power supply. "We have been promised up to 6 MW, and so far we have a consumption of about 0.6 MW." PUE has long been around 1.15.
During the move, 2,725 servers had to be relocated, with two racks per day moving from Monday to Thursday. "We kept Friday free in case there were any problems." During the move, some nodes were in the old datacenter and some were already in the new one. "Services were still running without any downtime or major outages."
Every morning two racks were disassembled and transported; at the destination they had to be reassembled and gradually plugged in and switched on. "Our colleagues plugged in the switch first so we could run the migration script." This figured out the IP address, checked the hostname change and created a minimal configuration for the switch to connect to management. It then ran another script that performed the reconfiguration. Ansible and Python were used for automation.
In the end, the migration was very smooth, apart from a few minor issues, everything went well thanks to the intensive preparation. "Thanks to the automation, everything went smoothly." The connections between datacenters handled all the synchronization and communication of the cluster.
Ondřej Zajíček: BIRD, MPLS and EVPN
MPLS is used to transmit packets in a different way than IP. "We mark the packet with a meaningless identifier on the input to the network and discard the header on the output." It is a technology that runs between the link layer and the network layer. The advantage is that it allows for very fine traffic handling, the implementation can be very fast, and it allows for explicit separation of individual flows.
The disadvantage of MPLS is that managing such a dynamic network can be more complicated. "You have to distribute information in the network about which flow corresponds to which label." Protocols such as LDP, RSVP-TE or BGP are used to distribute this information.
MPLS has been fully implemented in the BIRD daemon since version 2.14. "The modular form of BIRD assumes that we are in the IP world. That's why we thought for a long time about how to embrace MPLS." In the end, a BGP-only solution was chosen, with no LDP or RSVP-TE support available at this time. "But it is possible to use BGP as an internal routing protocol, then you don't need another protocol." But sometime in the future, other options will probably be implemented.
There are MPLS tables that actually correspond to IP tables and also allow you to export information to the system kernel. "We have introduced route attributes that allow you to define rules for accepting labels."
EVPN is essentially a distributed bridge, where a network spread across multiple routers can behave as a single environment. To do that, it needs to signal the network state and propagate MAC addresses. "You do that over BGP, then the data itself flows over some encapsulation." In addition to the MAC address, the records can also contain VLAN information. "We're still working on it, there are still some kinks."
Ľubor Jurena: Modern network configuration with systemd-networkd
The systemd-networkd tool is part of the systemd ecosystem and runs as a standalone service. "The goal is to reduce the dependency on other system libraries and prepare the entire configuration in one place." It allows you to configure physical interfaces as well as virtual network devices, and it interacts with other systemd components such as systemd-resolved.
Configuration files in /etc/systemd/network are written in systemd's INI-like syntax and divided into several sections. The basic configuration is very simple: first determine which interface you are matching and then define its properties. If we want to place an interface in a VRF, we don't have to do it with post-up scripts; we can do it directly in the configuration. It is also possible to use a wildcard entry that covers multiple network interfaces.
IPv6-only operation is supported, and it is possible to use prefix delegation, simply enable packet forwarding between interfaces, or use masquerading. The networkctl command-line tool can be used for control, allowing you to reload the configuration, change options and view DHCP information, for example. "By simply running it, we can see which interfaces are available and which ones networkd manages."
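A small illustrative .network file pulling these pieces together (addresses and the VRF name are placeholders):

```ini
# /etc/systemd/network/10-uplink.network -- illustrative sketch
[Match]
Name=en*                # wildcard entry covering multiple interfaces

[Network]
Address=192.0.2.10/24
Gateway=192.0.2.1
DNS=192.0.2.53
VRF=vrf-external        # placing the interface in a VRF, no post-up scripts
IPForward=yes           # enable packet forwarding between interfaces
```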
Pavel Šimerda: Linux: hardware switch support
In 2008, the DSA feature was added to the Linux kernel, which allows you to mark packets going from the CPU to the switch, which is part of the board, for example. "We then have metadata that allows us to use the output ports as separate network interfaces." Later, so-called bridge offloading was developed, which allows maximum configuration to be transferred to the hardware. "Even if the hardware doesn't support it, there is still a software bridge available." This, of course, does not have the same performance as hardware.
In 2017, there was a lot of excitement about Linux distributions designed for deployment on switches. "The enthusiasm gradually cooled down and not as much ended up happening in the following years." In 2021, DSA support came to OpenWRT as well.
DSA mode lets you decide how to handle which traffic: whether frames should be processed by the CPU or left to the switch chip. Protocols like STP, RSTP, MSTP, LLDP or LACP are a special type of traffic; these need to be processed in the CPU rather than by the standard VLAN-based processing.
In 2022, the ability to set the MSTI state on individual ports was added. This solves the problem of some links being disconnected under plain STP: using MSTP, you can split traffic by VLAN and send different traffic over different links, loading them evenly. "The original implementation was very inappropriate from a standards and hardware perspective."
Václav Nesvadba: Network tuning alternatives
We still have to use IPv4, and there are not many addresses available. "So we try to use them as efficiently as possible." For example, when connecting two elements in a network we usually use a /30 prefix, but then we lose half of the allocated addresses. It is better to use /31, but not all devices support this setting. "MikroTik has an undocumented solution for this; in Linux it works automatically."
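For illustration, on Linux the /31 case needs nothing special (RFC 3021 point-to-point addressing; addresses are examples):

```bash
# Both addresses of the /31 are usable; nothing is lost to network and
# broadcast addresses as with a /30.
ip addr add 192.0.2.0/31 dev eth0    # the peer side uses 192.0.2.1/31
```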
Does it make sense to save IPv6 addresses? "There are billions of them, but it still makes sense because of the faster transition and the ability to avoid unnecessary waste." It also matters for neighbor cache exhaustion attacks, in which an attacker generates traffic to all addresses in a given range. "Legitimate addresses can be dropped and then become unavailable. We've tried it and it works."
Sometimes a customer only wants one IPv6 address, but usually they are given a huge range. "It seems a shame to me that they only use one." So it is possible to carve out a /120 range and assign customers slices from it, for example a /124 each (a /120 holds 256 addresses, so it fits sixteen /124 slices of 16 addresses). "If there are more customers, we just stretch the mask." Using the same values at the end of IPv4 and IPv6 addresses can also help in troubleshooting. "It's clearer than using completely different addresses on the interface."
Ondřej Caletka: A look behind the scenes of the network for the RIPE meeting
The RIPE meeting is a twice-yearly event for more than 600 participants from all over the world. "We carry our entire Wi-Fi network with our own IP addresses." It even has its own autonomous system, AS2121.
Users often complain about geolocation problems. "Geolocation services count on Wi-Fi APs not moving." But here the whole network moves, and private companies maintain many lists of IP addresses and their locations.
RIPE made a deal with Google and started publishing a google.csv file on its website, listing address ranges and their locations. "It was so popular that RFC 8805 was created to standardise it." But you still need to identify the major geolocation data providers and contact them whenever there is a change.
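For illustration, RFC 8805 geofeed entries are simple CSV lines (the prefixes below are documentation examples, not the real meeting ranges):

```csv
# format: prefix,country,region,city,postal (comment lines start with '#')
192.0.2.0/24,CZ,CZ-10,Prague,
2001:db8:1::/48,CZ,CZ-10,Prague,
```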
At the heart of the conference network are two small SuperMicro servers running 25 virtual machines that provide routing, firewalls, DHCP servers, DNS resolvers and Wi-Fi controllers. "The rest is just L2 switches from Juniper, Zyxel and MikroTik." The backbone is made up of 10GE ports, but traffic typically peaks at around 800 Mbit/s.
The whole network runs on open source: routing is handled by BIRD, the firewall by nftables, and further inside run Knot Resolver, Kea, Jool and other tools. "It's all orchestrated by Ansible."
The public network is operated in IPv6-mostly mode, where modern devices do not need to draw IPv4 addresses. In practice this saves most of the IPv4 addresses: approximately one hundred are used instead of six hundred. There is also an IPv6-only network and a classic dual-stack network. Then there are several management networks, a separate network for the elements and a small separate network for video streaming. "It's quite a few different networks."
Lubomír Prda: OpenBMP - what the heck was going on in my network (lightning talk)
BMP stands for BGP Monitoring Protocol and allows you to get BGP information from the router to an analyzer. "It lets you know that you have a problem on your network." One of the collectors for the data is called OpenBMP and was originally called SNAS. "You may still see this in some commercial devices that say they can be connected to SNAS."
One of the outputs is Looking Glass, which lists details of the IP address entered. "It also pulls in information from various databases such as geolocation and others." This information is downloaded in the background so it is available even when the original source is not.
It is also possible to analyse BGP details, where for example the number of updates from each peer can be tracked. A complete history is available, so that it is possible to retrospectively examine how various changes have occurred and why network behavior is changing.