Tag: Hardware

Dkphhh Created@

This article is translated from "How Trump's tariffs will hurt US tech companies," published by The Verge on September 18. The original author is Chaim Gartenberg.

Because mainland China bars ordinary online media from discussing the trade war, this translation could not run on 煎蛋, but I think the piece has value, so I have archived it here.

I have no intention of arguing the rights and wrongs of the trade war, but its impact on both countries is very real, and it amounts to more than a few hard numbers. Big companies can use the media and lobbyists to make their views heard and win advantages for themselves, but countless small companies and ordinary people with no way to speak up have become the trade war's silent casualties.

This article illustrates only the plight of ordinary American companies. The situation in mainland China cannot be fully known, since the media there has been silenced, but it is not hard to imagine.


On Monday night, the Office of the US Trade Representative (USTR) finalized a new round of tariffs on $200 billion worth of Chinese goods imported into the US, and the impact on tech companies will be severe. Starting next week, the affected goods will face a 10 percent tariff upon entering the US, rising to 25 percent by the end of the year. Consumer electronics such as phones and computers are not on the list, but a large number of electronic components are, including printed circuit boards, battery components, and chips. Nearly every American company that does business with China will be affected to some degree, but for small companies that depend on parts imported from China, this trade war is nothing short of devastating. On top of that, the escalating trade war is making life even harder for assembly plants in the US.

In the latest version of the list, some of the more controversial product categories were removed, most notably health trackers and voice-assistant devices, exemptions that Apple had lobbied for. In an interview with Good Morning America, Tim Cook sounded upbeat about the coming tariffs. "I'm optimistic," Cook said. "Trade is not a zero-sum game. I think the two countries will sort this out and get back on track soon."

Other companies are not so lucky. Smart TV maker Element Electronics says the tariffs will be a disaster for its factory in South Carolina. TVs imported from China are not on the tariff list, but the cost of main boards and LCD panels will rise 25 percent over the coming year, an unthinkable prospect for a company assembling TVs in the US.

Element's legal counsel, David Baer, said he initially assumed the tariff policy was a mistake. "I couldn't believe the US government was going to shut down our factory in South Carolina," Baer told the USTR panel in August.

Factories that assemble computers in the US face the same problem. Dell, which has plants in Massachusetts and North Carolina, said the tariffs would "cause serious harm to Dell and its employees": as costs rise, it must either raise prices or sell at a loss. Companies that build their products in China, meanwhile, are untouched.

Economists saw this coming: to them, a circuit board used as a component and a computer sold as a consumer product are two different goods. "Tariffs on components are bound to hurt the manufacturers of finished products," said Eric Bond, who studies international trade at Vanderbilt University. "Historically, the response has been to offset the effect with higher tariffs on finished goods." So far, though, Trump has not taken that point on board, leaving the main finished products untaxed while putting tariffs on minerals, metals, and components awaiting assembly.

**Small businesses are hit especially hard by the tariffs.** CyberPowerPC, a California-based maker of high-end custom PCs, told the USTR earlier this month that "this tariff policy will raise the price of our products and leave us unable to stay competitive," the company's CEO wrote. "In our company's 20-year history, the proposed Section 301 tariffs are the biggest threat we have ever faced."

John Samborski is the CEO of an Illinois-based PC manufacturer whose orders come mainly from education and government customers, and his criticism of the tariffs is harsher still: "President Trump's campaign slogan must have been 'Making China Great Again.'"

Other companies worry about more complicated consequences. According to one briefing document, the new tariffs will push up the price of telecom equipment, slow the rollout of 5G networks, and leave America's mobile networks lagging behind. Tariffs on server parts will raise the cost of cloud services, which in turn raises costs for internet companies, effectively handing Chinese firms such as Alibaba an opening to compete with Google and Amazon.

The chaotic way the tariffs were rolled out has also caused plenty of problems. Brilliant Home has spent three years working on its smart home controller, and Trump's sudden tariffs caught the company off guard. The product was supposed to sell for $249, $80 less than an iPad, but a 25 percent tariff made that price untenable. When the product finally went on sale last week, it was priced at $299; with the tariff already higher than the company's margin, passing the cost on to consumers was the only option.

A giant like Apple can get the policy changed, but for a company whose product has only just gone on sale, that is clearly out of reach. "Giants have the resources to ride out a trade war, or to lobby politicians for exemptions," said Brilliant CEO Aaron Emigh. "Startups have no voice in this process."

And even if these companies survive this round of the trade war, another round may well be waiting for them.

Dkphhh Created@

Have smartphones become as boring as PCs?

Global smartphone shipments have started to decline, and data from every research firm points to the same fact. In 2016, global shipments reached 1.47 billion units, an all-time high, but that was only a 2.3 percent increase over the 1.44 billion shipped in 2015. In 2017, global shipments totaled 1.462 billion units, the first decline in the industry's history.

This has been a year of refresh cycles for the phone industry. All-screen designs, dual cameras, wireless charging, AI: these new technologies have become industry buzzwords, but consumers are not buying it. At the start of the year IDC predicted that smartphone shipments would rebound in 2018, but after Q3 it lowered its forecast to roughly 1.455 billion units worldwide. The bottleneck is obvious: the industry's collective innovation is failing to stir up any desire among consumers.


Why do none of us want to upgrade our phones anymore? To answer that, we have to go back to 2011, the year smartphones exploded.

If the first year of China's mobile internet was 2010, the year WeChat was born, then the first year of the smartphone era should be 2011, the year Xiaomi was founded. Smartphone shipments took off that year, reaching 491.4 million units worldwide and 118 million in China. In 2012, 2013, 2014, and 2015, shipments kept multiplying. What drove this explosive growth?

First, the wave of upgrades from feature phones to smartphones; second, the demand for better performance.

The upgrade wave is easy to understand. Six or seven years ago, the world's 6.4 billion people, including China's 1.4 billion, were still mostly using feature phones. Smartphones simply outclassed feature phones, and the iPhone 4 and iPhone 4S had already educated the market; the only thing holding people back was that smartphones routinely cost four or five thousand yuan. Then Xiaomi stepped up with a 1,999-yuan phone, kicking off a free-for-all among domestic brands in the 2,000-yuan bracket, and in 2013 Redmi started the sub-1,000-yuan war, dragging the average price of a smartphone steadily downward. Falling prices always benefit consumers, so this wave of smartphone adoption quickly swept the whole country. Everything from "a young person's first smartphone" to "the first smartphone you buy for your parents" was sold during this period, and by 2018 everyone from ten-year-olds to sixty-year-olds had one.

The upgrade wave was about getting a smartphone into the hands of people who did not have one; replacing a phone after that was about getting one that was faster and smoother. In 2011 and 2012, when smartphones were just taking off, their computing power was still feeble and could not keep up with how much software demanded of it. One update to the Taobao or Weibo app and the phone you had bought a month earlier could no longer open them smoothly. But in those years, hardware performance grew as fast as shipments did. From the iPhone 4s to the iPhone 5, and from the 5 to the 5s, Apple's A-series processors roughly doubled in performance almost every year. Android was the same; take Xiaomi, where each new flagship from the Mi 2 through the Mi 5 doubled performance year over year.

But over the past two years, the performance gains in flagship phones have shrunk. The A11 chip in the iPhone X scores around 10,000 in the benchmark app Geekbench, while this year's A12-powered iPhone Xs scores only about 11,000, below expectations. On the Android side, the flagship Snapdragon 845 is only 25 to 30 percent faster than its predecessor. The era of explosive growth is over.

These smaller gains also match how much extra performance users actually need. WeChat and Alipay, the apps most people use most, do not demand much; if they open a little slowly, so be it, as long as they work. Even mobile gamers can drop the graphics a notch and a 2015 phone will still run Honor of Kings without much strain. It is like the ten-year-old PC in your home: it still boots up fine for writing in Word, streaming Youku or iQiyi in a browser is no problem, and it can even run League of Legends. For 80 percent of everyday tasks, a three-year-old phone is enough. So in 2018 both reasons to buy a new phone have evaporated: everyone already has a smartphone, and everyone feels the one in their hand can "soldier on for another year."

Alongside falling shipments has come a rise in average selling prices. 1,999 yuan was long the benchmark price for Xiaomi's numbered flagships, but that price became history with the Mi 5 in 2016. The base model still cost 1,999 yuan, but its cut-down Snapdragon 820 and 3 GB RAM plus 32 GB storage combination hardly felt "hardcore" even then; it looked more like a move made just to hold the 1,999 line. The Mi 6, released in 2017, shook off the 1,999 shackle entirely and raised the price to 2,499, and this year's Mi 8 starts at 2,699.

The iPhone's price climb has been even more dramatic. After the pricier Plus model arrived in the iPhone 6 era, the iPhone's average selling price (ASP) kept rising. The iPhone X started at a staggering 8,388 yuan, this year's iPhone Xs adds another 300 yuan to reach 8,699, and the top-spec iPhone Xs Max goes as high as 12,799 yuan. Clearly, the iPhone has left the Android camp's price range behind entirely, sidestepping head-on competition and instead mining the higher-end "luxury phone" market.

There is only one reason everyone has ended up here: profit. Phone makers are raising prices across the board partly because component costs are up, but mostly to protect their margins while sales shrink. The industry has moved past the era of capturing new users and into the "second half," where everyone fights over the existing market. In this capital winter, no profit means no money to survive on, let alone money to pour into product development and technical innovation. The pity is that more than half of the industry's profits are swept up by Apple. To be fair, Apple has not wasted that money: breakthroughs that changed the industry's direction, like Siri, Touch ID, and Face ID, were all bought with it.

Update@2019/2/18:

Looking back now, my thinking is clearer: the root cause of smartphone price increases is that, at this stage, computing power already meets consumers' basic needs. Upgrade cycles therefore lengthen, sales will stagnate or decline for the foreseeable future, and manufacturers are bound to raise prices to protect their profits.


The past few years of smartphone history read like a condensed version of the history of the PC. Early PCs were expensive and had few competitors, a "blue ocean," until Dell fired the first shot in the price war and quickly dragged market prices down. Early PCs were also underpowered, but with Moore's law behind them, performance improved steadily every year. As the old joke goes, "whatever performance Moore giveth, Gates taketh away": hardware performance and software bloat chased each other for a stretch, the PC market heated up fast, and the blue ocean soon turned into a "red ocean." IBM, judging that the market had no growth left in it, sold its PC business to Lenovo and bowed out of the fight; that was 2005. By around 2008, the few remaining brands were each sitting in their assigned price bands, refreshing products on schedule with designs supplied by the upstream supply chain, and the PC industry went from a red ocean to a "dead sea," with little chance of big waves ever again.

To this day, PCs still get a CPU refresh every September and a GPU refresh the following spring. The core components all come from the upstream supply chain; the brands mostly integrate the reference designs into products and market them. Writing this paragraph, I suddenly remembered what Luo Yonghao said a few months ago: "We're all just solution integrators, so who are we kidding?"

It looks like phones have reached the same point.

This June, the OPPO Find X stunned the market with its motorized sliding all-screen design. Three months later, rumor had it that the [Xiaomi Mix 3](https://www.ithome.com/html/android/380357.htm) and new phones from [Honor (the Magic) and Lenovo](https://www.ithome.com/0/386/200.htm) would adopt similar designs, except the slider would become a cheaper, more durable "manual" one. That is the brutal reality: innovation is too expensive. For today's smartphones, innovation only happens when profit-rich first-tier makers and the upstream supply chain refine a new solution together, and only once the solution is mature enough does it trickle down to second-tier makers and mid- to low-end product lines. Then comes the familiar result: every phone wears the same face, and only the logo changes.

So when Huawei became number one in China and number three worldwide, I was not surprised at all, because Huawei is the only manufacturer that spans both the upstream and downstream supply chain, and its Kirin chips are genuine core technology.

Look at the laptop industry: from 2008 to 2018 it went through the thin-bezel makeover and gained new 2-in-1 designs, but mainstream products still do not look much different than they did ten years ago. The 2008 MacBook Air weighed 1.36 kg, and today's mainstream thin-and-lights are barely lighter. Worse, Moore's law is faltering and performance gains are hitting a wall. Intel's 10nm CPUs have been delayed yet again, so all it could offer laptop makers in September were the so-called 8th-gen "performance-enhanced" CPUs, still built on a 14nm process that has now been polished for four years, with negligible gains over last year's "old 8th-gen" parts. Laptop makers can only work with what they are given; until AMD's chips truly catch up with Intel's, they have no other choice.


In the phone industry, Apple has always been an outlier, because it is the only company able to escape the supply chain's constraints. It has chip-design capabilities unmatched in the industry; every year its A-series chips are the benchmark the rest of the field measures itself against. It can also push its suppliers' technology forward: if a supplier's technology falls short of its requirements, Apple can pour money into joint development and drive the upstream supply chain to upgrade, or simply acquire the supplier and have it work for Apple alone. Its order volumes and market share also give it leverage in negotiations, which is why it can ask Sony for custom image sensors and ask its rival Samsung for custom AMOLED panels.

But Apple has its complacent stretches too. From the iPhone 6 to the iPhone 8, four generations of phones wore the same face. The innovations the S years used to bring, Siri on the 4S, Touch ID on the 5S, 3D Touch on the 6s, were nowhere to be seen this year. This year's upgrades amount to a faster processor, a sharper camera, a more brilliant screen, and louder speakers, all boring spec bumps. Of course, spec upgrades are still upgrades, and they do improve the user experience, but none of them is as obvious as the upgrade in the new iPhone's price.

As one user put it: "It reminds me of Nokia in 2008: swap the shell, bump the clock speed by 20 MHz, and charge 1,000 more. Apple has now reached that same place in the industry."

Perhaps the smartphone industry will also slip into the laptop industry's "dead sea" state, with innovation driven by the giants and everyone else scraping by on the supply chain. But I hope none of this comes true. PCs are already boring enough; I do not want smartphones, which were just getting interesting, to become as boring as PCs.

Anna-Sofia Lesiv Saved@

How We Built the Internet


*The internet is like water—we take its existence for granted, but its birth was by no means pre-ordained. A constellation of inventors, organizations, and efforts all contributed to its creation. In one of her signature deep dives, Contrary writer Anna-Sofia Lesiv excavates the history of digital communication infrastructure, from the invention of the telephone to the widespread installation of fiber-optic cable and big tech's subsidization of undersea cables. Read this chronicle to understand how the internet's decentralized origins led to its current state as fractured spaces governed by private entities—and the implications for its future accessibility. —Kate Lee*


The internet is a universe of its own. For one, it codifies and processes the record of our society’s activities in a shared language, a language that can be transmitted across electric signals and electromagnetic waves at light speeds.

The infrastructure that makes this scale possible is similarly astounding—a massive, global web of physical hardware, consisting of more than 5 billion kilometers of fiber-optic cable, more than 574 active and planned submarine cables that span over 1 million kilometers in length, and a constellation of more than 5,400 satellites offering connectivity from low earth orbit (LEO).

According to recent estimates, 328 million terabytes of data are created each day. Billions of smartphone devices are sold every year, and although it’s difficult to accurately count the total number of individually connected devices, some estimates put this number between 20 and 50 billion.

“The Internet is no longer tracking the population of humans and the level of human use. The growth of the Internet is no longer bounded by human population growth, nor the number of hours in the day when humans are awake,” writes Geoff Huston, chief scientist at the nonprofit Asia Pacific Network Information Center.

But without a designated steward, the internet faces challenges for its continued maintenance—and for the accessibility it provides. These are incredibly important questions. But in order to grasp them, it’s important to understand the internet in its entirety, from its development to where we are today.


The theory of information

In the analog era, every type of data had a designated medium. Text was transmitted via paper. Images were transmitted via canvas or photographs. Speech was communicated via sound waves.

A major breakthrough occurred when Alexander Graham Bell invented the telephone in 1876. Sound waves that were created on one end of the phone line were converted into electrical frequencies, which were then carried through a wire. At the other end, those same frequencies were reproduced as sound once again. Speech could now transcend physical proximity.

Unfortunately, while this system extended the range of conversations, it still suffered from the same drawbacks as conversations held in direct physical proximity. Just as background noise makes it harder to hear someone speak, electrical interference in the transfer line would introduce noise and scramble the message coming across the wire. Once noise was introduced, there was no real way to remove it and restore the original message. Even repeaters, which amplified signals, had the adverse effect of amplifying the noise from the interference. Over enough distance, the original message could become incomprehensible.

Still, the phone companies tried to make it work. The first transcontinental line was established in 1914, connecting customers between San Francisco and New York. It comprised 3,400 miles of wire hung from 130,000 poles.

In those days, the biggest telephone provider was the American Telephone and Telegraph Company (AT&T), which had absorbed the Bell Telephone Company in 1899. As long-distance communications exploded across the United States, Bell Labs, an internal research department of electrical engineers and mathematicians, started to think about expanding the network’s capacity. One of these engineers was Claude Shannon.

In 1941, Shannon arrived at Bell Labs from MIT, where the ideas behind the computer revolution were in their infancy. He studied under Norbert Wiener, the father of cybernetics, and worked on Vannevar Bush’s differential analyzer, a type of mechanical computer that could solve differential equations by using arbitrarily designed circuits to produce specific calculations.

Source: Computer History Museum.

It was Shannon’s experience with the differential analyzer that inspired the idea for his master’s thesis. In 1937, he submitted “A Symbolic Analysis of Relay and Switching Circuits.” It was a breakthrough paper that pointed out that Boolean algebra could be represented physically in electrical circuits. The beautiful thing about these Boolean operators is that they require only two values—on and off.

It was an elegant way of standardizing the design of computer logic. And, if the computer’s operations could be standardized, perhaps the inputs the computer operated on could be standardized too.

When Shannon began working at Bell Labs during the Second World War, in part to study cryptographic communications as part of the American war effort, there was no clear definition of information. “Information” was a synonym for meaning or significance; its essence was largely ephemeral. As Shannon studied the structures of messages and language systems, he realized that there was a mathematical structure that underlay *information*. This meant that information could, in fact, be quantified. But to do so, information would need a unit of measurement.

Shannon coined the term “bit” to represent the smallest singular unit of information. This framework of quantification translated easily to the electronic signals in a digital computer, which could only be in one of two states—on or off. Shannon published these insights in his 1948 paper, “A Mathematical Theory of Communication,” just one year after the invention of the transistor by his colleagues at Bell Labs.
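As a quick aside (this is standard information theory, added here for illustration rather than quoted from the article): Shannon measured the average information produced by a source in terms of its entropy. If a source emits symbol $x$ with probability $p(x)$, then

$$H(X) = -\sum_{x} p(x)\,\log_2 p(x) \ \ \text{bits per symbol.}$$

A fair coin flip carries exactly one bit; a heavily biased coin carries less, because its outcome is more predictable.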

The paper didn’t simply discuss information encoding. It also created a mathematical framework to categorize the entire communication process in this way. For instance, Shannon noted that all information traveling from a sender to a recipient must pass through a channel, whether that channel be a wire or the atmosphere.

Shannon’s transformative insight was that every channel has a threshold—a maximum amount of information that can be delivered reliably to a recipient. As long as the quantity of information carried through the channel fell below the threshold, it could be delivered to the recipient intact, even if noise had scrambled some of the message during transmission. He used mathematics to prove that any message could be error-corrected into its original state if it traveled through a large-enough channel.
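That threshold has a famous closed form for a noisy analog channel, the Shannon–Hartley capacity (again a standard result, added here for illustration rather than taken from the article). For a channel with bandwidth $B$ hertz and signal-to-noise ratio $S/N$,

$$C = B \log_2\!\left(1 + \frac{S}{N}\right) \ \ \text{bits per second.}$$

Transmit below rate $C$ and suitable coding can drive the error rate arbitrarily close to zero; transmit above it and reliable delivery becomes impossible, no matter how clever the code.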

The enormity of this revolution is difficult to communicate today, mainly because we’re swimming in its consequences. Shannon’s theory implied that text, images, films, and even genetic material could be translated into his informational language of bits. It laid out the rules by which machines could talk to one another—about anything.

At the time that Shannon developed his theory, computers could not yet *communicate* with one another. If you wanted to transfer information from one computer to the other, you would have to physically walk over to the other computer and manually input the data yourself. However, talking machines were now an emerging possibility. And Shannon had just written the handbook for how to start building them.

Switching to packets

By the mid-20th century, the telephone system was the only interconnected network, and AT&T ran the largest one: a monstrous web of copper wires criss-crossing the continent.

The telephone network worked primarily through circuit switching. Every pair of callers would get a dedicated “line” for the duration of their conversation. When it ended, an operator would reassign that line to connect other pairs of callers, and so on.

At the time, it was possible to get computers “on the network” by converting their digital signals into analog signals, and sending the analog signals through the telephone lines. But reserving an entire line for a single computer-to-computer interaction was seen as hugely wasteful.

Leonard Kleinrock, a student of Shannon’s at MIT, began to explore the design for a digital communications network—one that could transmit digital bits instead of analog sound waves.

His solution, which he wrote up as his graduate dissertation, was a packet-switching system that involved breaking up digital messages into a series of smaller pieces known as packets. Packet switching shared resources among connected computers. Rather than having a single computer’s long communiqué take up an entire line, that line could instead be shared among several users’ packets. This design allowed more messages to get to their destinations more efficiently.

For this scheme to work, there would need to be a network mechanism responsible for granting access to different packets very quickly. To prevent bottlenecks, this mechanism would need to know how to calculate the most efficient, opportunistic path to take a packet to its destination. And this mechanism couldn’t be a central point in the system that could get stuck with traffic—it would need to be a distributed mechanism that worked at each node in the network.

Kleinrock approached AT&T and asked if the company would be interested in implementing such a system. AT&T rejected his proposal—most demand was still in analog communications. Instead, they told him to use the regular phone lines to send his digital communications—but that made no economic sense.

“It takes you 35 seconds to dial up a call. You charge me for a minimum of three minutes, and I want to send a hundredth-of-a-second of data,” Kleinrock said.

It would take the U.S. government to resolve this impasse and command such a network into existence. Shaken by the Soviet Union’s success in launching Sputnik into orbit, the U.S. Department of Defense had begun investing heavily in new research and development, creating ARPA, the Advanced Research Projects Agency, in 1958. By the late 1960s, ARPA was funding various research labs across the country.

Robert Taylor, who was tasked with monitoring the programs’ progress from the Pentagon, had set up a separate Teletype terminal for each of three ARPA-funded programs. At a time when computers cost anywhere from $500,000 to several million dollars, three computers sitting side-by-side seemed like a tremendous waste of money.

“Once you saw that there were these three different terminals to these three distinct places the obvious question that would come to anyone’s mind [was]: why don’t we just have a network such that we have one terminal and we can go anywhere we want?” Taylor asked.

This was the perfect application for packet switching. Taylor, familiar with Kleinrock’s work, commissioned an electronics company to build the types of packet switchers Kleinrock had envisioned. These packet switchers were known as interface message processors (IMPs). The first two IMPs were connected to mainframes at UCLA and Stanford Research Institute (SRI), using the telephone service between them as the communications backbone. On October 29, 1969, the first message between UCLA and SRI was sent. ARPANET was born.

ARPANET grew rapidly. By 1973, there were 40 computers connected to IMPs across the country. As the network grew, it became clear that a more robust packet-switching protocol would need to be developed. ARPANET’s protocol had a few properties that prevented it from scaling easily. It struggled to deal with packets arriving out of order, didn’t have a great way to prioritize them, and lacked an optimized system to deal with computer addresses.

Source: Computer History Museum.

By 1974, researchers Vinton Cerf and Robert Kahn came out with “A Protocol for Packet Network Intercommunication.” They outlined the ideas that would eventually become Transmission Control Protocol (TCP) and Internet Protocol (IP)—the two fundamental standards of the internet today. The core idea that enabled both was a “datagram,” which wrapped the packets in a little envelope. That envelope would act as a little header at the front of each packet that would include the address it was going to, along with other helpful bits of info.

In Cerf and Kahn’s conception, TCP would run on the end-nodes of the network—meaning that it wouldn’t run on the routers and obstruct traffic, but instead on users’ computers. TCP would handle everything: breaking messages into packets, placing the packets into datagrams, ordering the packets correctly at the receiver’s end, and performing error correction.

Packets would then be routed via IP through the network, which ran on all the packet-directing routers. IP only looked at the destination of the packet, while remaining entirely blind to the contents it was transmitting, enabling both speed and privacy.
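To make that division of labor concrete, here is a toy Python sketch of the datagram idea. The field names and sizes are invented for illustration and are not the real TCP/IP wire formats: the sender splits a message into packets and stamps each with a destination and a sequence number, the network routes purely by destination and may deliver packets out of order, and the receiver reorders and reassembles them.

```python
import random
from dataclasses import dataclass

# Toy model of datagrams: invented fields, not the real TCP/IP headers.
@dataclass
class Datagram:
    dst: str        # destination address: the only field the "network" reads
    seq: int        # sequence number, used by the receiver to reorder
    total: int      # how many packets make up the whole message
    payload: bytes  # a slice of the original message

def send(message: bytes, dst: str, packet_size: int = 8) -> list[Datagram]:
    """Sender side (TCP-like): split a message and wrap each piece in a header."""
    chunks = [message[i:i + packet_size] for i in range(0, len(message), packet_size)]
    return [Datagram(dst, seq, len(chunks), chunk) for seq, chunk in enumerate(chunks)]

def network(datagrams: list[Datagram]) -> list[Datagram]:
    """Network side (IP-like): route by destination only; order is not guaranteed."""
    delivered = list(datagrams)
    random.shuffle(delivered)  # simulate packets taking different paths
    return delivered

def receive(datagrams: list[Datagram]) -> bytes:
    """Receiver side (TCP-like): reorder by sequence number and reassemble."""
    ordered = sorted(datagrams, key=lambda d: d.seq)
    assert ordered and len(ordered) == ordered[0].total, "some packets never arrived"
    return b"".join(d.payload for d in ordered)

if __name__ == "__main__":
    packets = send(b"LO AND BEHOLD, A MESSAGE IN PACKETS", dst="10.0.0.2")
    print(receive(network(packets)))
```

Everything that needs knowledge of the whole message, splitting it up, putting it back in order, noticing that a piece is missing, lives at the endpoints; the middle of the network only ever looks at the destination field, which is the arrangement Cerf and Kahn describe.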

These protocols were trialed on a number of nodes within the ARPANET, and the standards for TCP and IP were officially published in 1981. What was exceedingly clever about this suite of protocols was its generality. TCP and IP did not care which carrier technology transmitted their packets, whether it be copper wire, fiber-optic cable, or radio. And they imposed no constraints on what the bits could be formatted into—video, text, simple messages, or even web pages formatted in a browser.

Source: David D. Clark, *Designing an Internet*.

This gave the system a lot of freedom and potential. Every use case could be built and distributed to any machine with an IP address in the network. Even then, it was difficult to foresee just how massive the internet would one day become.

David Clark, one of the architects of the original internet, wrote in 1978 that “we should … prepare for the day when there are more than 256 networks in the Internet.” He now looks upon that comment with some humor. Many assumptions about the nature of computer networking have changed since then, primarily the explosion in the number of personal computers. Today, billions of individual devices are connected across hundreds of thousands of smaller networks. Remarkably, they all still do so using IP.

Although ARPANET was decommissioned in 1990, the rest of the connected computers kept going. Residences with personal computers used dial-up to get email access. After 1989, a new virtual knowledge base was invented: the World Wide Web.

With the advent of the web, new infrastructure, consisting of web servers, emerged to ensure the web was always available to users, and programs like web browsers allowed end nodes to view the information and web pages stored in the servers.

Source: Our World in Data.

As the number of connected people increased in hockey-stick fashion, carriers finally began realizing that dial-up—converting digital to analog signals—was not going to cut it anymore. They would need to rebuild the physical connectivity layer by making it digital-first.

The single biggest development that would enable this and alter the internet forever was the mass installation of fiber-optic cable throughout the 1990s. Fiber optics use photons traveling through thin glass to increase the speed of information flow. The fastest connection possible with copper wire was about 45 million bits per second (Mbps). Fiber optics made that connection more than 2,000 times faster. Today, residences can hook into a fiber-optic connection that can deliver them 100 billion bits per second (Gbps).

Fiber was initially laid down by telecom companies offering high-quality cable television service to homes. The same lines would be used to provide internet access to these households. However, these service speeds were so fast that a whole new category of behavior became possible online. Information moved fast enough to make applications like video calling or video streaming a reality.

The connection was so good that video would no longer have to go through the cable company’s digital link to your television. It could be transmitted through those same IP packets and viewed with the same experience on your computer.

YouTube debuted in 2005 and Netflix began streaming in 2007. The data consumption of American households skyrocketed. Streaming a film or TV show requires about 1 to 3 gigabytes of data per hour. In 2013, the median household consumed 20-60 gigabytes of data per month. Today, that number falls somewhere around 587 gigabytes.

And while it may have been the government and small research groups that kickstarted the birth of the internet, its evolution from then on was dictated by market forces, including service providers that offered cheaper-than-ever communication channels and users that primarily wanted to use those channels for entertainment.

A new kind of internet emerges

If the internet imagined by Cerf and Kahn was a distributed network of routers and endpoints that shared data in a peer-to-peer fashion, the internet of our day is a wildly different beast.

The biggest reason for this is that the internet today is not primarily used for back-and-forth networking and communications—the vast majority of users treat it as a high-speed channel for content delivery.

In 2022, video streaming comprised nearly 58 percent of all Internet traffic. Netflix and YouTube alone accounted for 15 and 11 percent, respectively.

This even shows up in internet service provision statistics. Far more capacity is granted for downlink to end nodes than for uplink—meaning there is more capacity to provide information to end-user nodes than to send data through networks. Typical cable speeds for downlink might reach over 1,000 Mbps, but only about 35 Mbps are granted for uplink. It’s not really a two-way street anymore.

Even though the downlink speeds enabled by fiber were blazingly fast, the laws of physics still imposed some harsh realities for global internet companies with servers headquartered in the United States. The image below shows the “round-trip time” for various global users to connect to Facebook in 2011.

Source: Geoff Huston.

At the time, Facebook users in Asia or Africa had a completely different experience from their counterparts in the U.S. Their connection to a Facebook server had to travel halfway around the world, while users in the U.S. or Canada could enjoy nearly instantaneous service. To combat this, larger companies like Google, Facebook, and Netflix began storing their content physically closer to users through CDNs, or “content delivery networks.”

These hubs would store caches of the websites’ data so that global users wouldn’t need to ping Facebook’s main servers—they could merely interact with the CDNs. The largest companies realized that they could go even further. If their client base was global, they had an economic incentive to build a global service infrastructure. Instead of simply owning the CDNs that host your data, why not own the literal fiber cable that connects servers from the United States to the rest of the world?

In the 2020s, the largest internet companies have done just that. Most of the world’s submarine cable capacity is now either partially or entirely owned by a FAANG company—meaning Facebook (Meta), Amazon, Apple, Netflix, or Google (Alphabet). Below is a map of some of the sub-sea cables that Facebook has played a part in financing.

Source: Telegeography.

These cable systems are increasingly impressive. Google, which owns a number of sub-sea cables across the Atlantic and Pacific, can deliver hundreds of terabits per second through its infrastructure.

In other words, these applications have become so popular that they have had to leave traditional internet infrastructure and operate their services within their own private networks. These networks not only handle the physical layer, but also create new transfer protocols—totally disconnected from IP or TCP. Data is transferred on their own private protocols, essentially creating digital fiefdoms.

This verticalization around an enclosed network has offered a number of benefits for such companies. If IP poses security risks that are inconvenient for these companies to deal with, they can just stop using IP. If the nature by which TCP delivers data to the end-nodes is not efficient enough for the company’s purposes, they can create their own protocols to do it better.

On the other hand, the fracturing of the internet from a common digital space to a tapestry of private networks raises important questions about its future as a public good.

For instance, as provision becomes more privatized, it is difficult to say on whose shoulders the responsibility will fall for providing access to the internet as a “human right,” as the U.N. describes it.

And even though the internet has become the de facto record of recent society’s activities, no one has the dedicated role of maintaining and preserving these records. Already, the problem known as link rot affects everyone from the Harvard Law Review, where, according to Jonathan Zittrain, three-quarters of all cited links no longer function, to The New York Times, where roughly half of all articles contain at least one rotted link.

The consolation is that the story of the internet is nowhere near over. It is a dynamic and constantly evolving structure. Just as high-speed fiber optics reshaped how we use the internet, forthcoming technologies may have a similarly transformative effect on the structure of our networks.

SpaceX’s Starlink is already unlocking a completely new way of providing service to millions. Its data packets, which travel to users via radio waves from low earth orbit, may soon be one of the fastest and most economical ways of delivering internet access to a majority of users on Earth. After all, the distance from LEO to the surface of the Earth is just a fraction of the length of subsea cables across the Atlantic and Pacific oceans. Astranis, another satellite internet service provider that parks its small sats in geostationary orbit, may deliver a similarly game-changing service for many. Internet from space may one day become a kind of common global provider. We will need to wait and see what kind of opportunities a sea change like this may unlock.

Still, it is undeniable that what was once a unified network has, over time, fractured into smaller spaces, governed independently of the whole. If the initial problems of networking involved the feasibility of digital communications, present and future considerations will center on the social aspects of a network that is provided by private entities, used by private entities, but relied on by the public.


Anna-Sofia Lesiv is a writer at venture capital firm Contrary, where she originally published this piece. She graduated from Stanford with a degree in economics and has worked at Bridgewater, Founders Fund, and 8VC.
