Back at it for the 3rd year in a row!
Preparation Start: 2024-09
Competition Date: 2025-03-22
Cyber Conquest is a purple-team cybersecurity competition where teams defend their systems while attacking other teams' systems utilizing both offensive and defensive skill sets!
Quick TLDR of what Cyber Conquest is and glossary for some terms
Computer Club Wiki / Cyber Conquest website / DakotaCon Website
The first year I ran this (2023) I was directly running everything, primarily developing all of the physical services, and kind of micromanaging everything. I had recruited a team of about 8 people to help build out boxes. This management method was not sustainable, particularly with me no longer being on campus, and because of the sheer time commitment.
Last year (2024) I started with a few close friends who had demonstrated good work and dedication1) the previous year to be my 'admin team', hoping that this team could specialize to take specific loads off of my plate and lead to a better competition overall, with a 'board of directors' making decisions instead of just me. That idea worked quite well for 2024, and my admin team was all on board to do it again this year! It was very helpful to have a whole team of people working on the admin details for running this event: Gillian took care of communications with teams, advertising, and documentation; Irina led many of our meetings and helped keep us organized; Tristan was the technical boots on the ground, helping with wiring and soldering; and Brandon supplied valuable knowledge and insights into our planning process. The five of us worked quite well together, planning out each week's goals in a Tuesday planning call before our Wednesday full team meeting.
Back in September I assembled my core admin team to run this competition. We had a couple of meetings figuring out roughly what we were going to do before recruiting ~12 more people to help with the build-out.
We decided on a fallout theme this year. Teams were given two scored laptops and two Kali laptops. All four had Ethernet, meaning that all of them were connected directly to the team's network.
System Name | Operating System | Type | Important Services | IP |
---|---|---|---|---|
Firefall Citadel | pfSense | Virtual | Firewall | 192.168.0.1 |
Nyx Station | Ubuntu 24 | Laptop | Bridge Control | 192.168.0.23 |
Starlance Station | Debian 10 | Virtual | News Aggregator | 192.168.0.24 |
Eclipticon | Windows Server 2019 | Virtual | Domain Controller / DNS | 192.168.0.42 |
Cyrillium Heights | Raspbian2) | Raspberry Pi | Traffic Lights, Wireless | 192.168.0.66 |
Neonspire | Windows Server 2019 | Virtual | Domain Controller / DNS | 192.168.0.68 |
Echo-Solis | Windows 10 | Laptop | Crane Control | 192.168.0.69 |
Kali | Kali Linux | Laptop | Haxzors | DHCP |
With the recent CUPS vulnerabilities being released right as we were starting the build process, I thought it would be great to incorporate one into the competition. This forces an interesting change to the paradigm that most competition services operate under. Normally a service waits passively until a connection comes in, processes that request, then goes back to waiting. That request-response paradigm is great for most competition services (web/FTP/SSH) but does not work so well for this vuln. The vulnerability works by the attacker reaching out and registering a malicious system as a printer, then crucially relies on the 'user' printing something using their local CUPS service. I had separately wanted to do some sort of display screen3) this year. Ethan took both of these partial ideas and combined them to build out the billboard service. Every minute the service polls the scoring engine for information about the state of the team's services, then uses the local CUPS service to print a humorous message to the IPP server on the scoring IP on the team's network. The scoring Pi then displays the message on a small screen.
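To give a rough idea of how that billboard flow fits together, here's a minimal Python sketch. The scoring endpoint, queue name, and message format are all made up for illustration; the real service Ethan wrote likely looks quite different.

```python
import subprocess
import time
import requests

SCORING_API = "http://192.168.0.10/api/status"  # hypothetical scoring engine endpoint

def fetch_status():
    """Poll the scoring engine for this team's service states."""
    resp = requests.get(SCORING_API, timeout=5)
    resp.raise_for_status()
    return resp.json()

def print_message(text: str):
    """Hand the message to the local CUPS client, which submits it to the billboard queue."""
    # "billboard" is a hypothetical locally-configured queue pointing at the scoring Pi's IPP server
    subprocess.run(["lp", "-d", "billboard"], input=text.encode(), check=True)

while True:
    status = fetch_status()
    up = sum(1 for s in status.get("services", []) if s.get("up"))
    print_message(f"{up} services up. The Overseer is watching.")
    time.sleep(60)
```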
During one of our initial planning meetings we were discussing infrastructure that would have neat moving parts, and a raising bridge became an obvious choice. A group of people from the ops team (Collin, Gillian, Liberty, Cutshaw) took on this service and built an IRC-controlled drawbridge. The server communicates over serial with an Arduino to drive the two servos that raise and lower the bridge.
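The serial side of something like this is pretty simple. Here's a rough pyserial sketch; the device path and the one-character command protocol are assumptions for illustration, not the actual protocol the Arduino speaks.

```python
import serial  # pyserial

# Hypothetical device path and baud rate; the real values depend on the Arduino sketch.
bridge = serial.Serial("/dev/ttyACM0", 9600, timeout=2)

def set_bridge(raised: bool):
    """Send a single-byte command; the Arduino maps it to servo angles."""
    bridge.write(b"U" if raised else b"D")
    bridge.flush()

# e.g. an IRC bot handler could call set_bridge(True) when someone sends "!raise"
set_bridge(True)
```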
This was a completely new medium for us to work in. John found rpitx, which can directly broadcast a signal from 5 kHz up to 1.5 GHz from just a GPIO pin. Using an RTL-SDR (RTL2832U), that signal can be picked up and decoded. The service that John ended up creating had a webapp receive a message from the scoring engine and then transmit it as a POCSAG (pager) message. The scoring Pi then receives that message and decodes it to ensure that it is correct.
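Roughly, the two ends of that pipeline look something like the sketch below. This assumes the pocsag utility that ships with rpitx on the transmit side and the usual rtl_fm + multimon-ng pipeline on the receive side; the exact flags and frequency units should be double-checked against each tool's help output.

```python
import subprocess

def transmit(message: str, address: int = 1):
    """On the team Pi: broadcast a POCSAG page with rpitx's pocsag tool (needs root)."""
    # pocsag reads "address:message" lines on stdin; check its help for -f's expected units
    subprocess.run(["sudo", "./pocsag", "-f", "466230000"],
                   input=f"{address}:{message}".encode(), check=True)

def receive() -> str:
    """On the scoring Pi: demodulate with rtl_fm and decode with multimon-ng."""
    rtl = subprocess.Popen(["rtl_fm", "-f", "466.230M", "-s", "22050"],
                           stdout=subprocess.PIPE)
    mm = subprocess.Popen(["multimon-ng", "-a", "POCSAG1200", "-t", "raw", "/dev/stdin"],
                          stdin=rtl.stdout, stdout=subprocess.PIPE)
    # Return the first decoded page line, e.g. "POCSAG1200: Address: 1 ... Alpha: ..."
    return mm.stdout.readline().decode()
```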
This service worked great in testing with 1-2 teams but started to show signs of problems when I finally did larger-scale testing on the Thursday of our setup week. Some Pis would start to drift off their claimed transmit frequency after some time running, some would produce very noisy signals that could not be properly received, and one went from transmitting seemingly fine to just raising the noise floor whenever it transmitted, with no clear signal. In addition to the transmission errors, we were getting interference leading to failed decodes when running more than 3 or 4 at a time. Even upping the channel spacing from the 12.5 kHz of the POCSAG spec to 50 kHz did not seem to mitigate the problem. Eventually I found that 100 kHz spacing worked okay, but still had frequent decode errors when all 10 teams were transmitting at the same time.
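For reference, 10 teams at 100 kHz spacing only spans about 1 MHz of spectrum, so the frequency plan itself isn't unreasonable; the base frequency below is made up for illustration.

```python
BASE_KHZ = 466_000   # hypothetical starting frequency
SPACING_KHZ = 100    # the spacing that mostly worked

team_freqs = {team: BASE_KHZ + team * SPACING_KHZ for team in range(10)}
# -> team 0 at 466.000 MHz ... team 9 at 466.900 MHz, about 1 MHz total
```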
Me stressed out at 1am googling things on my phone instead of using one of the 7 different computers DIRECTLY IN FRONT OF ME!
Many hours into testing and debugging on setup day, the wireless service seemed to be working okay with the occasional reboot or hardware swap needed4), until we hit the largest problem yet. At around 11pm on setup night, most of the 20 Pis we had set up randomly started to boot loop. I think the boot looping was caused by undervoltage, because we were using 1 amp power blocks instead of the recommended 1.5 or 2 amp. Around 2am, we finally decided to stop debugging and give up on the service for this year. All of the components were still in the environment, but we did not end up scoring them because we never got all 10 teams working at the same time.
We used the crane service that we made last year, with some improvements by Benjamin, Collin, and Will. Benjamin wrote an awesome front end for controlling it.
Same traffic lights as last year. Brandon resoldered them to be a bit less jank than last year and replaced a few LEDs that had blown out.
A somewhat last-minute addition: we made a scoreboard with lights in it to have a physical representation of our scoreboard. This is something that NCCDC has done for a while, and I always thought it would be fun to do. It turned out quite well and people loved it!
The virtual systems were hosted in the DSU IALab5). There were two main competition networks: a 'Blue' network connecting all the teams' networks, and a 'Red' network with the scoring engine as well as virtualized Kali boxes and some laptops. The two networks were connected through the DefSec Router, where traffic could be monitored by white team. Each team had their own virtual network holding their router and a few virtual systems. Each of these networks had its associated VLAN trunked to the competition room and split out to physical ports on a switch in the room. Each team then had a switch at their table, connected back to the main switch, for their laptops and Raspberry Pi. All of the team tables were arranged in a circle around a center table that held the actual physical systems. On the center table I had a scoring Pi that used multiple MCP23008 GPIO expanders to get enough GPIO ports to support the traffic lights.
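For a rough idea of what driving those expanders looks like, here is a small sketch using smbus2 and the register addresses from the MCP23008 datasheet; the I2C address and the pin-to-light mapping are assumptions, not our actual wiring.

```python
from smbus2 import SMBus

MCP23008_ADDR = 0x20   # default address with A0-A2 tied low; ours may have differed
IODIR = 0x00           # direction register: 0 = output
OLAT = 0x0A            # output latch register

bus = SMBus(1)                                   # Raspberry Pi I2C bus 1
bus.write_byte_data(MCP23008_ADDR, IODIR, 0x00)  # set all 8 pins as outputs

def set_lights(red: bool, yellow: bool, green: bool):
    """Drive a hypothetical red/yellow/green hookup on GP0-GP2."""
    value = (red << 0) | (yellow << 1) | (green << 2)
    bus.write_byte_data(MCP23008_ADDR, OLAT, value)

set_lights(red=True, yellow=False, green=False)
```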
Radio is neat!!! I wish that we had spent more time testing and had realized that there might be problems with transmitting 10 different weak signals through GPIO pins.
We need more structured planning, particularly for the setup day. So many people want to help but do not know what there is to do, or where things need to go.
Better time management! Specifically, I need to not waste 15 hours on testing wireless things. After a few hours I should have focused on getting everything else ready and working, then come back to wireless if there was still time.
We really need to use different passwords for each team. Most of the competition was teams using default credentials to log into other teams' boxes and mess with them. Having different credentials for each team would force us to make vulns that were actually reasonable and achievable in a short time. Additionally, we need to have box authors test that they can get system access through the vulns, and document the process. We are marketing the competition as all-levels and beginner-friendly, meaning that we need to have easily achievable vulns for beginners.
I really appreciate my admin team! It is so nice to have a good team of reliable people to work with to create this competition. Even with a great admin team, we could certainly not have done this without the awesome work from the whole operations team.
And of course a major shoutout to the awesome faculty of DSU! Especially Beacom Wizard Tom for supplying us a space to work, Cloud Master Eric for the main network setup, and Conference Coordinator Kyle who is the main person leading Dakota Con.
7am selfie when we finally got everything set up!