

What is Facebook’s architecture?

From various readings and conversations I had, my understanding of Facebook’s current architecture is:

  • Web front-end written in PHP. Facebook’s HipHop [1] then converts it to C++ and compiles it using g++, thus providing a high performance templating and Web logic execution layer
  • Business logic is exposed as services using Thrift [2]. Some of these services are implemented in PHP, C++ or Java depending on service requirements (some other languages are probably used…)
  • Services implemented in Java don’t use any usual enterprise application server but rather Facebook’s own custom application server. At first this can look like reinventing the wheel, but since these services are exposed and consumed only (or mostly) over Thrift, the overhead of Tomcat, or even Jetty, was probably too high with no significant added value for their needs.
  • Persistence is done using MySQL, Memcached [3], Facebook’s Cassandra [4], Hadoop’s HBase [5]. Memcached is used as a cache for MySQL as well as a general purpose cache. Facebook engineers admit that their use of Cassandra is currently decreasing as they now prefer HBase for its simpler consistency model and its MapReduce ability.
  • Offline processing is done using Hadoop and Hive
  • Data such as logging, clicks and feeds transit using Scribe [6] and are aggregated and stored in HDFS using Scribe-HDFS [7], thus allowing extended analysis using MapReduce
  • BigPipe [8] is their custom technology to accelerate page rendering using a pipelining logic
  • Varnish Cache [9] is used for HTTP proxying. They’ve preferred it for its high performance and efficiency [10].
  • The storage of the billions of photos posted by the users is handled by Haystack, an ad-hoc storage solution developed by Facebook which brings low level optimizations and append-only writes [11].
  • Facebook Messages is using its own architecture, which is notably based on infrastructure sharding and dynamic cluster management. Business logic and persistence are encapsulated in so-called ‘Cells’. Each Cell handles a subset of the users; new Cells can be added as popularity grows [12]. Persistence is achieved using HBase [13].
  • Facebook Messages’ search engine is built with an inverted index stored in HBase [14]
  • Facebook Search Engine’s implementation details are unknown as far as I know
  • The typeahead search uses a custom storage and retrieval logic [15]
  • Chat is based on an Epoll server developed in Erlang and accessed using Thrift [16]
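The Memcached usage described above is the classic cache-aside pattern: read from the cache first, fall back to the database on a miss, and invalidate on writes. A minimal sketch in Python, where plain dicts stand in for a memcached client and a MySQL lookup (the key names and helpers are illustrative assumptions, not Facebook's code):

```python
cache = {}                            # stands in for a memcached client
db = {"user:42": {"name": "alice"}}   # stands in for a MySQL table

def get_user(key):
    """Cache-aside read path."""
    value = cache.get(key)
    if value is not None:
        return value          # cache hit: no database round trip
    value = db.get(key)       # cache miss: query the database
    if value is not None:
        cache[key] = value    # populate the cache for subsequent reads
    return value

def update_user(key, value):
    """Write path: update the source of truth, then invalidate."""
    db[key] = value           # MySQL remains the source of truth
    cache.pop(key, None)      # drop the stale cache entry

get_user("user:42")           # first read misses and warms the cache
```

The important design choice is that the cache is invalidated on writes rather than updated in place, which avoids racing a concurrent read that could repopulate the cache with stale data.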
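BigPipe's pipelining logic can also be sketched: flush the static page skeleton immediately, then stream each "pagelet" to the browser as soon as it is rendered instead of waiting for the slowest one. A hedged Python-generator sketch of that flushing behavior (the pagelet names and the `injectPagelet` client hook are invented for illustration):

```python
def render_pagelets(pagelets):
    """Yield HTML chunks as they become ready, instead of building
    the whole page before sending the first byte."""
    # Flush the skeleton first so the browser can start parsing early.
    yield "<html><body><div id='skeleton'></div>"
    for name, render in pagelets:
        # Each pagelet is rendered independently and flushed immediately,
        # wrapped in a script that injects it into the skeleton.
        yield f"<script>injectPagelet('{name}', '{render()}')</script>"
    yield "</body></html>"

# Each pagelet pairs a name with a (possibly slow) render function.
chunks = list(render_pagelets([
    ("newsfeed", lambda: "feed-html"),
    ("chat", lambda: "chat-html"),
]))
```

In a real server the chunks would be written to the HTTP response with chunked transfer encoding; the list here just makes the flushing order visible.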
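The Messages search design can be made concrete with a toy inverted index: each term maps to the set of message ids that contain it, which is the structure persisted to HBase in Facebook's case. A self-contained Python sketch (the tokenization and sample data are simplified assumptions):

```python
from collections import defaultdict

def build_inverted_index(messages):
    """Map each term to the set of message ids containing it."""
    index = defaultdict(set)
    for msg_id, text in messages.items():
        for term in text.lower().split():
            index[term].add(msg_id)
    return index

messages = {1: "lunch tomorrow", 2: "tomorrow works", 3: "running late"}
index = build_inverted_index(messages)
index["tomorrow"]  # -> {1, 2}: both messages mention the term
```

Looking up a term is then a single key read, and multi-term queries reduce to set intersections over the posting sets.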

About the resources provisioned for each of these components, some information and numbers are known:

  • Facebook is estimated to own more than 60,000 servers [17]. Their recent datacenter in Prineville, Oregon runs entirely self-designed hardware [18] that was recently unveiled as the Open Compute Project [19].
  • 300 TB of data is stored in Memcached processes [20]
  • Their Hadoop and Hive cluster is made of 3000 servers, each with 8 cores, 32 GB of RAM and 12 TB of disk, for a total of 24k cores, 96 TB of RAM and 36 PB of disk [20]
  • 100 billion hits per day, 50 billion photos, 3 trillion objects cached, and 130 TB of logs per day as of July 2010 [21]
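The Hadoop/Hive cluster totals quoted above follow directly from the per-server specs; a quick arithmetic check (using decimal units, as the quoted figures do):

```python
# Per-server specs quoted for the Hadoop/Hive cluster.
servers = 3000
total_cores = servers * 8            # 24,000 cores
total_ram_tb = servers * 32 / 1000   # 32 GB each -> 96 TB of RAM
total_disk_pb = servers * 12 / 1000  # 12 TB each -> 36 PB of disk
```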

[1] HipHop for PHP: http://developers.facebook.com/blog/post/358
[2] Thrift: http://thrift.apache.org/
[3] Memcached: http://memcached.org/
[4] Cassandra: http://cassandra.apache.org/
[5] HBase: http://hbase.apache.org/
[6] Scribe: https://github.com/facebook/scribe
[7] Scribe-HDFS: http://hadoopblog.blogspot.com/2009/06/hdfs-scribe-integration.html
[8] BigPipe: http://www.facebook.com/notes/facebook-engineering/bigpipe-pipelining-web-pages-for-high-performance/389414033919
[9] Varnish Cache: http://www.varnish-cache.org/
[10] Facebook goes for Varnish: http://www.varnish-software.com/customers/facebook
[11] Needle in a haystack: efficient storage of billions of photos: http://www.facebook.com/note.php?note_id=76191543919
[12] Scaling the Messages Application Back End: http://www.facebook.com/note.php?note_id=10150148835363920
[13] The Underlying Technology of Messages: https://www.facebook.com/note.php?note_id=454991608919
[14] The Underlying Technology of Messages Tech Talk: http://www.facebook.com/video/video.php?v=690851516105
[15] Facebook’s typeahead search architecture: http://www.facebook.com/video/video.php?v=432864835468
[16] Facebook Chat: http://www.facebook.com/note.php?note_id=14218138919
[17] Who has the most Web Servers?: http://www.datacenterknowledge.com/archives/2009/05/14/whos-got-the-most-web-servers/
[18] Building Efficient Data Centers with the Open Compute Project: http://www.facebook.com/note.php?note_id=10150144039563920
[19] Open Compute Project: http://opencompute.org/
[20] Facebook’s architecture presentation at Devoxx 2010: http://www.devoxx.com
[21] Scaling Facebook to 500 million users and beyond: http://www.facebook.com/note.php?note_id=409881258919
