
Implementation of HET Data Transfer

Data acquired by all instruments are transferred in several steps in a quasi-real-time fashion.

First, information about new data is added to the data transfer database (DT DB). This database is the source for the DT software. V/L and HPF use different software and approaches to handling data, so the DT solutions vary.

For V/L, as soon as data are saved to the ramdisk, the instrument software fires a TCS proxy event to update the DT DB. For HPF, the script that updates the NR for new observations also updates the DT DB.
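
The exact DT DB schema and update mechanism are not described on this page. As a rough illustration only, the sketch below (Python, with a hypothetical SQLite table and column names) registers a newly acquired dataset so the DT software can pick it up:

{{{#!python
"""Hypothetical sketch of recording a new observation in the DT DB."""
import sqlite3
from datetime import datetime, timezone

def register_new_data(db_path, instrument, obs_number, data_path):
    """Insert a row describing newly acquired data for the DT software."""
    conn = sqlite3.connect(db_path)
    with conn:
        conn.execute(
            """CREATE TABLE IF NOT EXISTS dt_queue (
                   instrument TEXT,
                   obs_number INTEGER,
                   data_path  TEXT,
                   added_utc  TEXT)""")
        conn.execute(
            "INSERT INTO dt_queue VALUES (?, ?, ?, ?)",
            (instrument, obs_number, data_path,
             datetime.now(timezone.utc).isoformat()))
    conn.close()

# Example: a V/L exposure has just been saved on the ramdisk.
# register_new_data("dt.db", "LRS2", 42, "/ramdisk/lrs2/exp000042")
}}}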

The next step is to transfer or copy the data to HET NFS storage. The logic here differs between V/L and HPF.

Transfer to NFS

This is implemented via two separate services, one for HPF and another for V/L. A future autonomous service for HRS is also anticipated.

  • VIRUS/LRS2. Data are moved from the ramdisk to HET NFS storage. For VIRUS the service also runs VHC for various data checks.
  • HPF. Data are kept on the HPF computer, which is connected to the HET local network. Data are copied to NFS in the form of TAR archives. The service also creates archives of images from the ACQ, Guider1, Guider2, WFS1, WFS2, HPFACQ and DIMM cameras that were captured right before, after, or during an exposure for science and engineering observations (a sketch of this archiving step follows the list).
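
The real service's paths and archive naming are not documented here. As a hypothetical sketch, the snippet below packs one HPF exposure directory (science frames plus associated camera snapshots) into a TAR archive on the NFS mount:

{{{#!python
"""Hypothetical sketch of the HPF copy-to-NFS step."""
import tarfile
from pathlib import Path

def archive_exposure_to_nfs(exposure_dir, nfs_dir):
    """Pack one exposure directory into a TAR archive on the NFS mount."""
    exposure_dir = Path(exposure_dir)
    nfs_dir = Path(nfs_dir)
    nfs_dir.mkdir(parents=True, exist_ok=True)
    tar_path = nfs_dir / f"{exposure_dir.name}.tar"
    with tarfile.open(tar_path, "w") as tar:
        tar.add(str(exposure_dir), arcname=exposure_dir.name)
    return tar_path

# Example (paths are invented):
# archive_exposure_to_nfs("/hpf/data/20220911/Slope-0123",
#                         "/net/hetnfs/hpf/20220911")
}}}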

And the final step:

Transfer to TACC

A dedicated service copies any new data found on NFS to TACC. The system operates separately for each instrument. The transfer priority of an instrument's data can be adjusted by changing the various timeouts used by the DT software.
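
The actual timeout names and values live in the DT software configuration and are not reproduced here; the snippet below is only a hypothetical illustration of instrument-specific timeouts, where shortening a value raises that instrument's effective transfer priority:

{{{#!python
# Hypothetical per-instrument timeouts in seconds (names and values invented).
TRANSFER_TIMEOUTS = {
    "VIRUS": 300,
    "LRS2": 300,
    "HPF": 600,
}
}}}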

The data transfer logic for NFS and TACC is the same (a sketch follows the list below). Data are transferred if

  • There is a newer observation (the next observation number has appeared), or
  • The acquired data are older than an instrument-specific timeout.
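
As a minimal sketch of this shared decision rule (function and parameter names are illustrative, not the real DT code):

{{{#!python
"""Hypothetical sketch of the shared NFS/TACC transfer decision."""
import time

def should_transfer(obs_number, latest_obs_number, data_mtime, timeout_s, now=None):
    """Transfer if a newer observation exists or the data are older than
    the instrument-specific timeout."""
    now = time.time() if now is None else now
    newer_obs_exists = latest_obs_number > obs_number
    data_is_stale = (now - data_mtime) > timeout_s
    return newer_obs_exists or data_is_stale

# Example: no newer observation yet, but the files are older than a 600 s
# (HPF-like) timeout, so they are transferred anyway.
# should_transfer(17, 17, time.time() - 601, 600)  # -> True
}}}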

To get the HPF morning cals to TACC as fast as possible, we can add an extra LFC cal at the very end of the set: the extra exposure produces a newer observation number, which triggers transfer of the preceding cals without waiting for the timeout.
