Author: tarakiyee

I'm a public interest technologist working on critical FOSS infrastructure, standards, and the transformative potential of information technology.

The FCC is coming for BGP, what about the EU?

The Border Gateway Protocol is an important part of our internet infrastructure. It’s essentially a big set of rules that governs how data is routed among the many networks that form the internet. If DNS is the address book of the internet, BGP is the Autobahn.

For the longest time, BGP ran on trust and a dedicated community of operators, but that also left opportunities for abuse. A famous example is when Pakistan Telecom pretended to be YouTube for a while: they wanted to block the website in their country, but because of how they abused BGP, they ended up making YouTube unavailable around the world. There have also been a couple of high-profile BGP hijacks aimed at stealing cryptocurrency.
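To make the mechanics of that hijack concrete, here is a minimal sketch in Python of why a rogue announcement wins: routers prefer the most specific matching prefix, so a bogus /24 beats the legitimate /22. The prefixes and AS numbers are the ones widely reported for the 2008 incident, but treat them as illustrative.

```python
import ipaddress

# Hypothetical routing table: prefix -> origin AS.
# AS 36561 stands in for YouTube, AS 17557 for Pakistan Telecom.
routes = {
    ipaddress.ip_network("208.65.152.0/22"): 36561,  # legitimate announcement
    ipaddress.ip_network("208.65.153.0/24"): 17557,  # rogue, more specific announcement
}

def best_route(destination: str) -> int:
    """Return the origin AS of the longest (most specific) matching prefix."""
    dest = ipaddress.ip_address(destination)
    matches = [(net, asn) for net, asn in routes.items() if dest in net]
    net, asn = max(matches, key=lambda item: item[0].prefixlen)
    return asn

print(best_route("208.65.153.238"))  # -> 17557: traffic follows the hijacker's /24
```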

I just read George Michaelson’s blogpost on the APNIC website, which talks about how a recently published FCC draft is causing alarm in the technical community about potential regulation coming to the BGP space. It even prompted a response from ISOC. Michaelson argues that despite the protests, regulation is very likely:

“However, when it comes to BGP security and the potential risks posed to the state, the light-touch approach may reach the limits of risk that a government is prepared to accept without intervention.”

Read the full blogpost for more details.


It made me wonder: what about BGP regulation coming from the EU? They certainly haven’t been shy about technology regulation the past couple of years, especially when it comes to security. I scoured all the resources I could think of, but I can’t find anything public for now. However, ENISA, the EU’s cybersecurity agency, seems to be on top of things. The topic of BGP and RPKI (a security feature for BGP) was featured earlier this month at the ENISA Telecom & Digital Infrastructure Security Forum 2024, presented by Jad El Cham of RIPE NCC.
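For readers who haven’t run into RPKI before: its core building block is the Route Origin Authorization (ROA), a signed statement saying which AS may originate a prefix and up to what prefix length. Here’s a rough, hypothetical sketch of the origin-validation logic in Python; it skips all the cryptographic verification and repository fetching that a real relying-party validator does, and the ROA values are made up.

```python
import ipaddress
from dataclasses import dataclass

@dataclass
class ROA:
    prefix: ipaddress.IPv4Network
    max_length: int
    origin_as: int

# Hypothetical ROA set; a real validator fetches and cryptographically
# verifies these from the RPKI repositories.
roas = [ROA(ipaddress.ip_network("208.65.152.0/22"), 24, 36561)]

def validate(prefix: str, origin_as: int) -> str:
    """Classify an announcement as valid, invalid, or not-found (the RFC 6811 states)."""
    announced = ipaddress.ip_network(prefix)
    covering = [r for r in roas if announced.subnet_of(r.prefix)]
    if not covering:
        return "not-found"
    for roa in covering:
        if roa.origin_as == origin_as and announced.prefixlen <= roa.max_length:
            return "valid"
    return "invalid"

print(validate("208.65.153.0/24", 17557))  # -> "invalid": wrong origin AS for a covered prefix
```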

As far as I can tell, there are no public references to BGP regulation coming from the Union, but it’s worth noting that existing regulation already empowers ENISA and national authorities to supervise the same type of BGP security measures that the FCC is now considering, based on the European Electronic Communications Code (EECC) as well as the Network and Information Systems (NIS) Directive. As covered in this ENISA publication:

This work on BGP security was done in the context of Article 13a of the Framework directive, which asks EU Member States to ensure that providers take appropriate security measures to protect their networks and services. For the last decade, ENISA has collaborated closely with the EU Member States and experts from national telecom regulatory authorities (NRAs) which supervise this part of the EU legislation, under the ENISA Article 13a Expert Group.

ENISA, 7 Steps to Shore up BGP

That seems to indicate that the regulatory need might be a bit different in the EU than in the US, but I wonder whether heavier BGP regulation might still be in store depending on how the FCC process goes.

Do you know more about the EU’s plans regarding BGP regulation? I’m interested in learning more, so please comment or reach out.


What I Learnt from What We Learnt from the xz-utils Incident

I don’t know how your April went, but if it was anything like mine, you would have spent an uncharacteristic amount of time talking about compression tools, “insider attacks”, and build tooling. That’s because on March 29th, 2024, a backdoor was discovered in the widely-used data compression tool xz-utils.

The xz-utils backdoor (known as CVE-2024-3094 in some circles) targeted OpenSSH authentication on certain Linux distributions that use glibc, and it was hidden within build scripts and test files, making it harder to detect than usual. I'm not talking about the xz-utils incident itself in this blogpost; I'm talking about how much we talked about xz-utils.
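As a small aside for anyone who missed the scramble: the backdoor shipped in xz/liblzma releases 5.6.0 and 5.6.1, so the first triage step on most machines was simply checking the installed version. Here is a rough sketch of that check in Python; the command invocation and output parsing are assumptions about a typical Linux install, and this is no substitute for your distribution's advisory.

```python
import re
import subprocess
from typing import Optional

# Versions known to contain the CVE-2024-3094 backdoor.
BACKDOORED = {"5.6.0", "5.6.1"}

def installed_xz_version() -> Optional[str]:
    """Return the xz version string, or None if xz isn't installed or can't be parsed."""
    try:
        out = subprocess.run(["xz", "--version"], capture_output=True, text=True, check=True)
    except (FileNotFoundError, subprocess.CalledProcessError):
        return None
    match = re.search(r"xz \(XZ Utils\) (\S+)", out.stdout)
    return match.group(1) if match else None

version = installed_xz_version()
if version is None:
    print("xz not found (or version output not recognized)")
elif version in BACKDOORED:
    print(f"xz {version} is a backdoored release; consult your distro's advisory")
else:
    print(f"xz {version} is not in the known-affected set")
```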

The concept of the attention economy, introduced by Herbert A. Simon in the 1970s, revolves around the idea that human attention is a scarce and valuable resource. In an age where information is abundant but our capacity to consume it is limited, attention has become a commodity. Companies, advertisers, and media outlets all compete to capture and hold our attention because it drives what they need, whether it’s engagement, revenue, or influence.

In cybersecurity, this translates to a cycle of intense, short-lived focus on new vulnerabilities, followed by a rapid shift to the next emerging threat. What people do with that attention varies: some want to sell you a product or an idea, some want you to pay for their newspaper subscription, and some simply want to gloat that their flavor of technology is better than whatever the other people are using.

The xz-utils incident is not the first example of the industry’s reactive nature; the Heartbleed bug is the quintessential example. Heartbleed captured headlines, sparked endless discussions, and inspired a plethora of ideas and quick fixes. But once the immediate danger was averted and OpenSSL was “saved”, attention quickly moved on. Many structural issues persisted, and the maintainer-burnout-to-major-vulnerability pipeline continues to deliver.

I don’t know how we can break the attention economy cycle. All I know is that when the next big bad bug happens, we need to resist being reactive, avoid quick fixes, and focus on bringing attention to the structural issues that continue to threaten our software. I’m proud of STF’s response, for example.

I’m interested to hear if anyone has ideas on how to deal with the attention deficit and move to a proactive stance. The xz-utils incident was not a wake-up call; if anything, it was hitting snooze on your alarm for the 100th time. Rather than allowing the latest crisis to dictate our focus, we need to prioritize long-term, sustainable maintenance of our digital infrastructure, and to get there we need to invest a lot more time, resources, and people into our critical infrastructure.

SconePro with Network Jam and Clotted Streams

Last week I attended the IETF 119 meeting in Brisbane (remotely), including a session for a newly proposed working group called SCONEPRO, where some internet service providers and large video content platforms want to work together to make the controversial practice of traffic shaping work slightly better. Here are my notes and thoughts. I would like to thank Mallory Knodel and Daniel Kahn Gillmor for their input and for helping me make sense of all of this.

Background

The creatively named SCONEPRO (Secure Communication of Network Properties) meeting was held on March 21, 2024 as a working-group-forming BoF (Birds of a Feather) at IETF 119 in Brisbane. BoF meetings like these are prerequisites to setting up IETF working groups, ensuring there is enough interest within the community and that the IETF is the right place for standardization.

SCONEPRO aims to develop an internet protocol to deal with a particular use case: network operators, particularly mobile ones, often employ methods such as traffic shaping to control the flow of traffic when there is a high load on the network. This can interfere with how some applications run. SCONEPRO is particularly concerned with video applications.

Why video in particular? Not only does it form the majority of internet traffic by their estimation, but video streaming applications also often allow the client to adjust the bitrate (colloquially, the “quality” or “resolution” of a video) in order to reduce their impact on a congested network.

End users, through client applications, have no way of knowing for sure that their traffic is being shaped. Certain solutions exist to figure that out, but application developers argue that they are complicated and costly. At the same time, network operators usually have no way of telling what traffic is video traffic because transport encryption is so ubiquitous.

The SCONEPRO working group, if established, would develop a protocol that allows a network to tell a client application whether it wants to do traffic shaping and to announce the bitrate that the network is willing to allow. This gives the client the option to artificially reduce the video quality on its end. They argue that this would provide a better “quality of experience” (QoE) for their users.
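To illustrate the shape of the idea, here is my own hypothetical sketch in Python, not anything taken from the draft charter or a real wire format: the network advertises a maximum video bitrate, and a cooperating client clamps its bitrate choice to whichever is lower, its own throughput estimate or the advertised cap.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical representation of the kind of signal SCONEPRO discusses:
# the network tells the client the maximum video bitrate it will allow.
@dataclass
class NetworkRateSignal:
    max_video_bitrate_kbps: int

# Bitrate "ladder" a video player might choose from (illustrative values).
LADDER_KBPS = [500, 1500, 3000, 6000, 12000]

def choose_bitrate(estimated_throughput_kbps: int, signal: Optional[NetworkRateSignal]) -> int:
    """Pick the highest rung that fits both the measured throughput and,
    if present, the network's advertised cap."""
    ceiling = estimated_throughput_kbps
    if signal is not None:
        ceiling = min(ceiling, signal.max_video_bitrate_kbps)
    candidates = [rate for rate in LADDER_KBPS if rate <= ceiling]
    return max(candidates) if candidates else LADDER_KBPS[0]

# Without a signal the player guesses from throughput; with one, it complies voluntarily.
print(choose_bitrate(8000, None))                     # -> 6000
print(choose_bitrate(8000, NetworkRateSignal(2500)))  # -> 1500
```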

What Happened at the Meeting

The meeting started with a short explainer of the goal of the BoF by the chairs. I’ll give a summary of my notes and impressions, but if you’re interested in seeing for yourself, refer to the video at this link. You can also find links to the official notes for the meeting and the slides here.

How Shapers and Policers Work

Marcus Ihlar from Ericsson gave an overview of the current state of network shaping and policing; this is my summary of that talk. There are several reasons why a network might want to throttle video, for example bandwidth limitations and congestion control. Also, more networks are moving from a data-cap model for charging users to a bitrate-cap model, in which users can pay more to access higher-resolution media.

Client applications like video streaming services often employ a technique called adaptive bitrate (ABR), where they predict the capacity of the network and then dynamically change the bitrate of the video to deliver it without interruptions. Networks see this as an opportunity to reduce the load on their networks, so they attempt to detect when a traffic flow is video, then use traffic shapers or policers to throttle the flow artificially.
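For context on the “predict the capacity” part: a common, simplified ABR approach is to estimate throughput from recent segment downloads, for example with an exponentially weighted moving average, and feed that estimate into rate selection. A toy sketch, with made-up numbers:

```python
# Toy throughput estimator of the kind ABR players use: an exponentially
# weighted moving average over recently downloaded video segments.
def ewma_throughput(samples_kbps: list, alpha: float = 0.3) -> float:
    """Blend each new sample into the running estimate; recent samples weigh more."""
    estimate = samples_kbps[0]
    for sample in samples_kbps[1:]:
        estimate = alpha * sample + (1 - alpha) * estimate
    return estimate

# A shaper kicking in mid-stream shows up as a sudden drop in measured throughput,
# which the player only notices after a few segments.
print(round(ewma_throughput([8000, 7800, 2100, 2000, 1900])))
```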

The functional difference between a shaper and a policer is that the former delays network packets to spread them out over time, while the latter drops packets that exceed its allowed data-rate policy. Traffic shapers and policers often have the same end result.
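A toy illustration of that functional difference, using a crude per-tick rate budget rather than a real token bucket (real middleboxes are far more elaborate): the policer drops whatever exceeds its per-tick budget, while the shaper queues the excess and releases it in later ticks.

```python
from collections import deque

def police(bursts, tokens_per_tick: int):
    """Policer: forward packets while the per-tick budget lasts, drop the rest."""
    forwarded, dropped = [], []
    for tick, burst in enumerate(bursts):
        budget = tokens_per_tick
        for pkt in burst:
            if budget > 0:
                budget -= 1
                forwarded.append((tick, pkt))
            else:
                dropped.append((tick, pkt))
    return forwarded, dropped

def shape(bursts, tokens_per_tick: int):
    """Shaper: queue excess packets and send them in later ticks (added delay, no drops)."""
    queue, forwarded = deque(), []
    ticks = list(bursts) + [[]] * 10  # extra empty ticks to drain the queue
    for tick, burst in enumerate(ticks):
        queue.extend(burst)
        for _ in range(min(tokens_per_tick, len(queue))):
            forwarded.append((tick, queue.popleft()))
    return forwarded

packets = [["p1", "p2", "p3"], [], ["p4"]]
print(police(packets, tokens_per_tick=1))  # p2 and p3 are dropped
print(shape(packets, tokens_per_tick=1))   # p2 and p3 are delayed to later ticks, nothing dropped
```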

Neither technique works particularly well, because encryption makes it hard to detect video content in the first place. Network operators often try to overcome that constraint with heuristics, DPI, or interpreting the Server Name Indication (SNI) of the unencrypted initial QUIC packet, which is not always reliable. This means that either the shaping or the ABR might not work as planned, creating a bad user experience.
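For a sense of how crude that classification can be, here is a hypothetical version of the SNI heuristic in Python; the hostname patterns are made up for illustration. Anything the list misses, or anything hidden behind a generic CDN hostname or Encrypted Client Hello, is invisible to it.

```python
from typing import Optional

# Hypothetical SNI-matching heuristic of the kind operators use to guess
# whether a flow is video. Patterns and hostnames are illustrative only.
VIDEO_SNI_PATTERNS = ("video.", "googlevideo.", ".ttvnw.")

def looks_like_video(sni: Optional[str]) -> bool:
    """Guess from the TLS/QUIC Server Name Indication whether a flow carries video."""
    if sni is None:  # e.g. Encrypted Client Hello, or no SNI at all
        return False
    return any(pattern in sni for pattern in VIDEO_SNI_PATTERNS)

print(looks_like_video("rr3---sn-example.googlevideo.com"))  # True
print(looks_like_video("cdn77.example-video-platform.net"))  # False: generic CDN name, missed
```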

Some internet service providers have agreements with large content platforms that serve video (like YouTube) to make traffic shaping work more consistently, but these arrangements are all proprietary.

Meta and Ericsson Experiment

Matt Joras from Meta presented the results of a feasibility study conducted by Meta and Ericsson in which they developed a SCONEPRO proof of concept. They implemented a MASQUE proxy that connected a Facebook app and a Facebook video Content Delivery Network (CDN) server. In addition to facilitating the transfer of traffic between the CDN and the app, the proxy server also introduced a maximum send rate signal. The Facebook app and the CDN then used the send rate signal value to manually limit their bitrate to fit the self-imposed network constraint. Their takeaway was that SCONEPRO is feasible and results in improvements to consistent video playback, but only when compared to the experience with a traffic shaper.

Lessons from History

Brian Trammell gave a presentation on the history of PLUS, a prior IETF effort where a more generalized approach to on-path network property signalling was discussed but ultimately faltered. While the generalized approach was considered by many participants in the process to be good engineering, it created various unintended dystopic consequences once policy considerations were added to those engineering considerations. The cited example was that, when engineering a header to signal loss tolerance and flow start, it was possible in some cases to infer the age of the user from these network signals.

The recommendation based on the lessons learned was to keep SCONEPRO specific and to make it optional for clients.

Discussion on Use Cases and Scope

The second half of the meeting went into discussing the use case and potential scope of a charter. Here is my summary of key inputs as I understood them. Don’t quote anyone directly from this without reviewing the video source; any embellishments are mine.

  • There were questions about how to address network complexity, such as when there are multiple shapers on the path, and the need to get the information from the box with the lowest bandwidth, which would be the actual bottleneck.
  • Jason Livingood of Comcast expressed some frustration with having to revisit the discussion on traffic shaping. He mentioned that there are other solutions, such as investing in capacity, and also referenced regulatory action in the US to ban traffic shaping. Finally, he argued that networks shouldn’t sell what they can’t deliver.
  • David Schinazi from Google said, “This is a case of the IETF ensuring our job security a bit longer.”
  • Ted Hardie, also from Google and an author of an RFC on signaling, highlighted that one principle for good signal design is that there should be no incentive to fake it. He also brought up the example of the spin bit in QUIC and how IETF engineers are good at identifying side-channel attacks. Tommy Pauly from Apple expanded on that by mentioning Ted’s RFC, which has additional considerations for the design of path signals.
  • Tom Saffell provided some insights from YouTube’s infrastructure experience and the challenges faced in implementing proprietary solutions to this problem, and said Google and YouTube are interested in working on this. YouTube is supportive of network operator efforts to reduce data tonnage. Wonho Park from TikTok also expressed support for working on this problem, stating that traffic shaping is not optimal. There were similar supportive inputs from Suhas Nandakumar (Cisco), Jeff Smith (T-Mobile America), and Dan Druta (AT&T).
  • Martin Duke expressed some concerns about the effects of this on best-effort traffic. He acknowledged the argument that it could improve best-effort traffic by reducing incentives to do clumsy things in order to shape traffic. He also expressed concerns about extensibility to other use cases.
  • Lars Eggert, former chair of the QUIC working group, expressed concerns about how operators are enamored with adding complexity to manage capacity, and how that complexity is a lucrative market for vendors. He is also worried about this being used to monetize bitrate discrimination.
  • Other concerns brought up were around scalability, security, and feasibility of any possible solution, including issues related to discovery and authentication of proxies. How do you know the box giving you the signal has the authority to shape your traffic? Running so much traffic over proxies might be expensive and, ultimately, it doesn’t replace the need for shapers and policers which network operators might still use for other purposes.
  • Stephen Farrell, research fellow at Trinity College Dublin who studies security and networking, raised a concern about whether and how the security claims could be upheld, particularly client authentication to/of random boxes.
  • Tom Saffell (YouTube) mentioned some policy considerations that should be combined with the technical solution if they were to consider implementing it, namely:
    • Transparency to users: restrictions must be visible
    • User choice: buy a plan with no restriction
    • Equal treatment: wish to be treated as any other provider
  • Some comparisons were made between this and ECN (Explicit Congestion Notification, RFC 3168); however, Matt Joras (Meta) made the point that this is not explicitly a congestion issue but an application-layer signal, for example the network might be shaping traffic because of a subscriber policy.

Finally, there was a vote on whether the working group formation should move forward: 51 people voted yes and 20 voted no, showing some opposition to this and a lack of consensus.

Some Public Interest Considerations

Net Neutrality is the principle that internet service providers must treat all communications equally, and may not discriminate between traffic based on content, particularly for profit or to disadvantage competition. Giving network operators control over bitrate, even with consent from the client, opens the door to violating net neutrality.

The fig leaf on traffic shaping is that it’s framed as a congestion control or a network capacity issue. One argument for traffic shaping has always been user choice: that users might want to prioritize a video call over updates downloading in the background. If it’s the end user’s choice as to what traffic gets shaped, and if they consent to it, then it’s no longer harmful traffic discrimination.

The problem remains that we have to take the network operators’ word that these techniques are only applied when congestion happens, and not to extract more profit or to push users into paying more for higher bitrates artificially. SCONEPRO offers a “QoE” improvement over the status quo in (physically or artificially) capacity-constrained networks, but user “QoE” would also improve if the capacity of the network were increased. In the case of a protocol that requires opt-in from the application, this can lead to business partnerships that create a “fast lane,” which is another common net neutrality violation.

SCONEPRO currently proposes some design goals in the proposed charter that might be relevant to these issues:

1. Associativity with an application. The network properties must be associated with a given application traversing the network, for example a video playback.
2. Client initiation. The communication channel is initiated by a client device.
3. Network properties sent from the network. The network provides the properties to the client. The client might communicate with the network, but won’t be providing network properties.
4. On-path establishment. That is, no off-path element is needed to establish the communication channel between the entity communicating the properties and the client.
5. Optionality. The communication channel is strictly optional for the functioning of application flows. A client’s application flow must function even if the client does not establish the channel.
6. Properties are not directives. A client is not mandated to act on properties received from the network, and the network is not mandated to act in conformance with the properties.
(…)
9. Security. The mechanism must ensure the confidentiality, integrity, and authenticity of the communication. The mechanism must have an independent security context from the application’s security context.

SCONEPRO is being framed as a solution to improve user experience; however, most of the proponents seem to be telecom providers and major content platforms. I think SCONEPRO is a marginal improvement over the status quo in which traffic shaping is achieved with proprietary solutions and agreements between telecoms and major platforms.

One important consideration would be the effects of SCONEPRO deployment in different regulatory environments. In places where net neutrality protections are not robust, providing a “bitrate signal”, or future signals for use cases yet to be invented, may enable profit-based traffic discrimination.

It’s not clear how some of the desired properties of SCONEPRO, such as optionality or properties not being directives, can be technically enforced, which means that when looking at the effects of introducing such a protocol, these design goals can be safely ignored. Client applications that implement SCONEPRO gain an advantage over those that don’t even if all the rules are respected, and if they aren’t, this opens the door for telecoms to more easily offer tiered services, zero-rating, and fast lanes.

Ultimately, I do agree with the BoF’s premise that there is a problem to be solved, but it won’t be solved by encoding the status quo into the protocols of the internet. I think the practice of content-based traffic shaping needs to be looked at more closely and tackled from a regulatory and consumer advocacy standpoint. ABR traffic shaping, and by extension SCONEPRO, takes choice away from users and negotiates application parameters on the network in an opaque way to force data austerity on them.

Did you find this helpful or have some feedback? Would you like to see a follow up dive into similar prior work at the IETF like PLUS, MINUS, SPUD, or SADCDN? Reach out and let me know.

Hello W- nah just messing with you 🤣

It’s been a long time since my last blog post, and it feels so fucking good. While it does feel so incredibly good to be writing again, there is something so unfamiliar about my relationship to this space, my blog, and the internet in general. Which leads us to the first question I will answer today:

Where did all the old blog posts go?

They’re all happy and alive, frolicking in a server farm far far away. In reality, the internet has changed, and so have I. In fact, the internet I used to write about never existed in the first place. It was fiction, almost naive fiction, presented as reality, and as we know, reality shows never age well.

I had to take the archive down because I couldn’t draw a line between the person I was in the 2010s and the story I want to tell now. They’re not purged; I want to curate a few of them and present them within context when I have the time, but until then, the only way to access them would be the Web Archive or something.

Story you want to tell?

Yes, that’s what blogging is, you silly pants! I’m just in a very interesting period of my life, in a very interesting period of time, and both I and time are in a very interesting position. I’ve just left OTF after a very interesting five years of supporting people who build great tools to save those most vulnerable online, and now I’ve joined Techcultivation, looking to do more of that and beyond. Not to mention great projects being set up like the SVT, which I really want to tell you about. Those are all stories, from the past, the present, and the future, that I want to tell.

That I need to tell really.

Surviving a World in Crisis

Ron Burgundy saying "Well, that escalated Quickly"

Not gonna sugar coat it folks, since the last time I wrote a blog post, things have been rapidly becoming shittier. It was partially why I stopped. I called my older posts “almost naive” earlier, and they totally were. I’ve been disillusioned for as long as I can remember, and angry for even longer than that. I’ve also been tired. But one side effect of the disillusionment was that it made me feel embarrassed by the naive fiction I used to peddle pre-2016.

I will not belabor the point today, I’ll keep that for later blog posts, but here is why I’m writing again. Was I wrong about things in the past? Yeah I was. Was I naive? Almost adorably so. Did my politics evolve since then? I hope so. Is there a danger of me spewing more naive fiction that I might be embarrassed about in the future? Well, that’s actually my plan, and it’s almost crazy enough it might work.

When times are hard, do something. If it works, do it some more. If it does not work, do something else. But keep going.

Audre Lorde

Not writing has not been working for me. Writing things that turned out to be naive worked for me at the time. Crises robbed us of our imagination. But we don’t all have the luxury or privilege of being doom preppers or nihilists. Just as the climate crisis will hit the poor, the queer, and those in the larger world first, it will come for their imaginations first.

I want to write again, and maybe encourage you all to start blogging again, because we need to save our imagination; it’s the only way we can keep going. So expect more wonderful stories on this website, both the ones I promised above and more, about how we’re gonna get through this and make things better.