Re: [Brahms-dev-l] question about cvs

From: Christian Holm Christensen <cholm@nbi.dk>
Date: Sat Apr 15 2006 - 05:23:56 EDT
Hi Hiro,

On Fri, 2006-04-14 at 22:47 -0400, Hironori Ito wrote:
> Hello.  What time do you go to sleep?  =-O  

Way too late. 

> 
> Christian Holm Christensen wrote:
...
> >If you go for this option, I think the files should be stored in a way
> >that's accessible to all - that is, store the files in an (x)rootd
> >server.  If BRAHMS is not running an (x)rootd server accessible
> >world-wide, it's quite easy to set up one yourself.  Couple that with a
> >file catalogue, and you have half a GRID solution.
> >
> We are running rootd on every node.  They are not visible from the 
> outside world.  In fact, our file catalog already has entries for the 
> local disk space of each node (to be used by rootd).  Therefore, we 
> actually have distributed disk capability.  We just don't need/use 
> it (???) since we tidy up our disk space cautiously.  ;-)

The idea of a file-catalog would of course be to store a full URL
(rootd://rcas0001.rcf.bnl.gov/brahms/data07/foo/bar/baz/data.root) keyed
by some sort of logical identifier (foo/bar/baz/data.root); the user
then queries the file-catalog with the key and gets back the real URL. 
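
Something along these lines is what I have in mind - a minimal sketch,
untested, and the database host, table, and column names are all made up:

  // lookup.C -- resolve a logical file name via a MySQL file catalogue
  // and open the physical file it points to.
  #include <TSQLServer.h>
  #include <TSQLResult.h>
  #include <TSQLRow.h>
  #include <TFile.h>
  #include <TString.h>

  void lookup(const char* lfn = "foo/bar/baz/data.root")
  {
    // Hypothetical catalogue server and credentials.
    TSQLServer* db = TSQLServer::Connect("mysql://dbserver.example/catalogue",
                                         "reader", "secret");
    if (!db) return;

    TString query = Form("SELECT url FROM files WHERE lfn = '%s'", lfn);
    TSQLResult* res = db->Query(query.Data());
    TSQLRow*    row = res ? res->Next() : 0;
    if (row) {
      // The catalogue hands back the full physical URL, which
      // TFile::Open can use directly.
      TFile* file = TFile::Open(row->GetField(0));
      if (file) file->ls();
    }
    delete row;
    delete res;
    delete db;
  }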

Of course, if the host is not publicly accessible (private sub-net, or
the like), there might be a problem.  However, I think one can set up
xrootd with a `master' server that pulls from private servers. 

> Copying to HPSS is important, particularly for local disks, since they 
> are never backed up (and we lose them quite often due to disks going 
> bad from excessive heat in the room).   Just like with the file 
> catalog, you can have files in HPSS and on local or NFS disks.

Right.  Backing up is fine.  But HPSS is slow, so using that for
production storage is probably a bad idea.  

However, CERN has developed software that can pull in data from HPSS,
and ROOT can interface with it directly.   The software is CASTOR, and
the ROOT interface is TCastorFile. 
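
I haven't tried it from BNL, but as far as I remember you just hand
TFile::Open a castor: URL and ROOT's plug-in manager picks TCastorFile
for you.  Roughly like this (the path and histogram name are invented):

  // castor.C -- read a histogram from a file that lives in CASTOR.
  #include <TFile.h>
  #include <TH1.h>

  void castor()
  {
    // Made-up CASTOR path; the plug-in manager maps "castor:" URLs to
    // TCastorFile, which stages the file in from the mass-storage
    // back-end as needed.
    TFile* file =
      TFile::Open("castor:///castor/cern.ch/brahms/prod/run1234/data.root");
    if (!file || file->IsZombie()) return;

    TH1* h = (TH1*)file->Get("hMult");   // hypothetical histogram name
    if (h) h->Draw();
  }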

> >Very _very_ slow.  Encoding and decoding, database connections, and so
> >on.  
> >
> >>And, the response of the database might be too slow with larger 
> >>histograms for someone's taste???
> >
> >:-)
> >
> Since I have never stored serialized histograms as blobs in MySQL, I 
> have no idea how slow it is.  

Nor have I, but just think about how long it takes to get a single
parameter from the database (and please consider non-local
connections). 
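
Just to put the `encoding' part in concrete terms, this is roughly what
every store (and, in reverse, every fetch) of a histogram BLOB would
involve - a sketch, nothing I have actually benchmarked:

  // blob.C -- stream a histogram into a byte buffer, i.e. the kind of
  // thing one would INSERT into a MySQL BLOB column and decode again
  // on every SELECT.
  #include <TH1F.h>
  #include <TMessage.h>
  #include <MessageTypes.h>
  #include <cstdio>

  void blob()
  {
    TH1F h("h", "test", 1000, 0, 1);
    h.FillRandom("gaus", 100000);

    TMessage msg(kMESS_OBJECT);
    msg.WriteObject(&h);           // the "encoding" step

    // msg.Buffer()/msg.Length() is the blob that would go to the
    // database - and come back across the (possibly non-local)
    // connection on every read.
    printf("blob size: %d bytes\n", msg.Length());
  }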

> But, the slowness might depend on the person.  
> For example, I think that the Cross Bronx Expressway leading up to the 
> George Washington Bridge in New York City is a giant parking lot.  But 
> New Yorkers are not all that eager to fix this "highway".   :'(

LOL.

In a sense, it's `slow' if a faster alternative exists. 

> If "blob" is bad, do you have any experience with OO databases?  

ROOT :-)

(Not Objectivity - it has a lot of overhead and isn't particularly
fast). 
 
> That 
> might be faster???

I think ROOT is actually quite fast at serving files over the
network. 

In ALICE, we store _everything_ (and I do mean _everything_) in ROOT
files.  The file catalogue then gives the URLs, and the GRID middle-ware
pulls in the data from the remote servers. 
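
For what it's worth, reading a remote file is just a TFile::Open on the
URL the catalogue gave you, and ROOT only transfers the buffers it
actually needs rather than the whole file.  A sketch (host, path and
histogram name are made up):

  // remote.C -- pull one histogram off a remote (x)rootd server.
  #include <TFile.h>
  #include <TH1.h>
  #include <cstdio>

  void remote()
  {
    TFile* file = TFile::Open("root://rcas0001.rcf.bnl.gov//brahms/data07/"
                              "foo/bar/baz/data.root");
    if (!file || file->IsZombie()) return;

    // Only the buffers belonging to this object are shipped over the
    // network, not the full file.
    TH1* h = (TH1*)file->Get("hPt");   // hypothetical histogram name
    if (h) printf("entries: %g\n", h->GetEntries());

    delete file;
  }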

Yours,

-- 
 ___  |  Christian Holm Christensen 
  |_| |  -------------------------------------------------------------
    | |  Address: Sankt Hansgade 23, 1. th.  Phone:  (+45) 35 35 96 91
     _|           DK-2200 Copenhagen N       Cell:   (+45) 24 61 85 91
    _|            Denmark                    Office: (+45) 353  25 404
 ____|   Email:   cholm@nbi.dk               Web:    www.nbi.dk/~cholm
 | |

_______________________________________________
Brahms-dev-l mailing list
Brahms-dev-l@lists.bnl.gov
http://lists.bnl.gov/mailman/listinfo/brahms-dev-l