The Journal of China Universities of Posts and Telecommunications

• Networks •

Modeling in-network caching and bandwidth sharing performance in information-centric networking

Guo-Qing WANG 1, Tao HUANG 1, Jiang LIU 1, Jian-Ya CHEN 1, Yun-Jie LIU 2

  1. Beijing University of Posts and Telecommunications
    2. School of Information and Communication Engineering, Beijing University of Posts and Telecommunications
  • Received: 2012-09-17  Revised: 2012-12-21  Online: 2013-04-30  Published: 2013-04-26
  • Corresponding author: Guo-Qing WANG  E-mail: gqwang@bupt.edu.cn
  • Supported by:

    This work was supported by the National Basic Research Program of China (2012CB315801, 2011CB302901), and the Fundamental Research Funds for the Central Universities (2011RC0118).

Abstract:

Information-centric networking (ICN) proposes a content-centric paradigm with attractive advantages such as reduced network load, low dissemination latency, and energy efficiency. In this paper, based on an analytical model of ICN with a receiver-driven transport protocol and the least-recently-used (LRU) replacement policy, we derive expressions for the average content delivery time of the request arrival sequence at a single cache, and then extend these expressions to a cascade of caches. The expressions reveal the quantitative relationship among delivery time, cache size, and bandwidth. Our results, which analyze the trade-offs between performance and resources in ICN, can serve as a guide for designing ICN and evaluating its performance.

Key words:

ICN, LRU, miss probability, content delivery time
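
To make the quantities in the abstract concrete, the following minimal Python sketch (an illustration under stated assumptions, not the paper's derivation) estimates the LRU miss probability with Che's approximation under a Zipf request distribution and combines it with a simple hit/miss latency average to obtain a mean content delivery time. The catalog size, Zipf exponent, cache size, and per-request delays are hypothetical values chosen for illustration.

# A minimal illustrative sketch, NOT the paper's exact model: it combines Che's
# approximation for the LRU miss probability under the independent reference
# model (Zipf popularity) with a simple hit/miss latency average to estimate
# the mean content delivery time. Catalog size, Zipf exponent, cache size and
# the delays below are hypothetical values chosen for illustration.
import numpy as np

def zipf_popularity(n_contents: int, alpha: float) -> np.ndarray:
    """Normalized Zipf request probabilities q_1 >= q_2 >= ... >= q_N."""
    weights = np.arange(1, n_contents + 1, dtype=float) ** (-alpha)
    return weights / weights.sum()

def lru_miss_probability(q: np.ndarray, cache_size: int) -> float:
    """Che's approximation: find the characteristic time tc such that
    sum_i (1 - exp(-q_i * tc)) = cache_size, then average the per-content
    miss probabilities exp(-q_i * tc) over the request mix."""
    expected_occupancy = lambda tc: float(np.sum(1.0 - np.exp(-q * tc)))
    lo, hi = 0.0, 1.0
    while expected_occupancy(hi) < cache_size:      # bracket the root
        hi *= 2.0
    for _ in range(100):                            # plain bisection
        mid = 0.5 * (lo + hi)
        if expected_occupancy(mid) < cache_size:
            lo = mid
        else:
            hi = mid
    tc = 0.5 * (lo + hi)
    return float(np.sum(q * np.exp(-q * tc)))

def avg_delivery_time(p_miss: float, t_hit: float, t_miss: float) -> float:
    """Two-level model: hits are served by the local cache, misses upstream."""
    return (1.0 - p_miss) * t_hit + p_miss * t_miss

if __name__ == "__main__":
    q = zipf_popularity(n_contents=10_000, alpha=0.8)   # hypothetical catalog
    p_miss = lru_miss_probability(q, cache_size=200)    # hypothetical cache
    # Hypothetical delays: 2 ms from the edge cache, 20 ms from the origin.
    t = avg_delivery_time(p_miss, t_hit=0.002, t_miss=0.020)
    print(f"miss probability ~ {p_miss:.3f}, avg delivery time ~ {t*1e3:.2f} ms")

In a model of this kind, enlarging the cache lowers the miss probability and thus the fraction of requests that pay the upstream, bandwidth-limited delay, which is the delivery-time versus cache-size versus bandwidth trade-off the paper quantifies for a single cache and for a cascade of caches.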