Institute of Software, Chinese Academy of Sciences — Institutional Repository
Title:
Hippo: An enhancement of pipeline-aware in-memory caching for HDFS
Author: Wei, Lan (1); Lian, Wenbo (1); Liu, Kuien (1); Wang, Yongji (1)
Conference Name: 2014 23rd International Conference on Computer Communication and Networks, ICCCN 2014
Conference Date: August 4, 2014 - August 7, 2014
Issued Date: 2014
Conference Place: Shanghai, China
Corresponding Author: Wei, Lan
Publisher: Institute of Electrical and Electronics Engineers Inc.
Indexed Type: EI
ISSN: 1095-2055
ISBN: 9781479935727
Department: (1) Institute of Software, Chinese Academy of Sciences, China
Abstract: In the age of big data, distributed computing frameworks tend to coexist and collaborate in a pipeline under one scheduler. While a variety of techniques for reducing I/O latency have been proposed, few of them target the performance of the pipeline as a whole. This paper proposes memory management logic called 'Hippo', which targets distributed systems and in particular 'pipelined' applications that may span different big data frameworks. Though individual frameworks may have internal memory management primitives, Hippo provides a generic framework that works agnostic of these high-level operations. To increase the hit ratio of the in-memory cache, this paper discusses the granularity of caching and how Hippo leverages the job dependency graph to make memory retention and pre-fetching decisions. Our evaluations demonstrate that job dependency is essential to improving cache performance and that a global cache policy maker, in most cases, significantly outperforms explicit caching by users.
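The dependency-aware retention idea described in the abstract can be sketched as follows. This is a hypothetical illustration, not the paper's actual algorithm: the `Job` and `CachePolicy` names and the reference-counting scheme are assumptions introduced here to show how a job dependency graph could drive eviction and pre-fetching decisions for cached HDFS blocks.

```python
# Hypothetical sketch of dependency-graph-driven caching (illustrative only;
# Job and CachePolicy are assumed names, not the paper's API).
from collections import defaultdict


class Job:
    def __init__(self, name, reads, writes, downstream=()):
        self.name = name
        self.reads = set(reads)             # block IDs this job will read
        self.writes = set(writes)           # block IDs this job will produce
        self.downstream = list(downstream)  # jobs that depend on this one


class CachePolicy:
    """Retain a block while any pending job in the pipeline still reads it."""

    def __init__(self, pending_jobs):
        # Count future readers of each block across the whole job graph.
        self.future_readers = defaultdict(int)
        for job in pending_jobs:
            for block in job.reads:
                self.future_readers[block] += 1

    def on_job_finished(self, job):
        # Blocks with no remaining readers can be evicted from memory.
        evictable = []
        for block in job.reads:
            self.future_readers[block] -= 1
            if self.future_readers[block] == 0:
                evictable.append(block)
        # Pre-fetch blocks the next pipeline stage reads but this job
        # did not just produce (freshly written blocks are already warm).
        prefetch = set()
        for nxt in job.downstream:
            prefetch |= nxt.reads - job.writes
        return evictable, sorted(prefetch)


# A two-stage pipeline: "extract" feeds "aggregate".
agg = Job("aggregate", reads=["b1", "b2"], writes=["b3"])
ext = Job("extract", reads=["b0"], writes=["b1"], downstream=[agg])
policy = CachePolicy([ext, agg])
evict, warm = policy.on_job_finished(ext)  # evict ["b0"], pre-fetch ["b2"]
```

A per-framework cache would only see one job at a time; the point of a global policy maker, as in the abstract, is that the cross-framework dependency graph reveals which blocks have zero future readers and which are needed next.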
Language: English
Content Type: Conference Paper
URI: http://ir.iscas.ac.cn/handle/311060/16610
Appears in Collections: ISCAS Library — Conference Papers

Files in This Item:

There are no files associated with this item.


Recommended Citation:
Wei, Lan, Lian, Wenbo, Liu, Kuien, et al. Hippo: An enhancement of pipeline-aware in-memory caching for HDFS[C]. In: 2014 23rd International Conference on Computer Communication and Networks, ICCCN 2014. Shanghai, China. August 4, 2014 - August 7, 2014.

Items in IR are protected by copyright, with all rights reserved, unless otherwise indicated.

 

 

Copyright © 2007-2020 Institute of Software, Chinese Academy of Sciences
Powered by CSpace