= HPF =

You can find the HPF team's website with their manual at [http://psuastro.github.io/HPF/Operators-Manual/] and another useful site is their [http://psuastro.github.io/HPF/Exposure-Times/ exposure time calculator]

 * [#setup Setting up Screens and rasession]
 * [#logs HPF Screens logs]
 * [#obs Science observations]
 * [#abort Aborting observations]
 * [#ql Quick-look data and SNR]
 * [#sat Monitoring HPF exposures for saturation]
 * [#twi HPF evening twilights]
 * [#darks How to take Dark exposures]
 * [#cals HPF Calibrations]
 * [#nolfc Observing if LFC is down]
 * [#troubleshooting Troubleshooting]
   * [#timshet When TCS communication is lost or TIMSHET problems arise]
   * [#det Detector client restarts]
   * [#calcon Calibration client connection]
   * [#expose How to stop the client (if stuck exposing)]

== Setting up Screens and rasession == #setup

The easiest way to start up the RA screen session is to:
 - {{{ssh -X hpf@hpf}}} (and make the window fairly large)
 - run {{{rasession}}} in the SSH session
This will bring up the usual four-terminal screen session that the RAs have used. If you find rasession dead or in an unusable state, here is the fix:
 - {{{ssh hpf@hpf}}}
 - run the command {{{screen -X -S RA quit}}}
 - run the command {{{rasession}}}
Further rasession troubleshooting tips [wiki:RA/hpf#rasession can be found here]. In the rare event that the "Scripts" screen session is not running (or you accidentally "exit"ed out of it), you can create it with the following command:
{{{
(HPFpython) [hpf@hpf-server2 ~]$ scripts_session
}}}
'''Most important note:''' never run {{{exit}}} or press ctrl-C/ctrl-D in these screens, as it is likely to crash the HPF instrument. Disconnect by closing the xterm/gnome-terminal window externally - do not type "exit" or anything like that, ever. [[br]] The layout is shown below along with the commands used to connect to each screen (note the top left is not connected to any screen session).
More here about [wiki:HetProcedures/RA/Screens HPF Screens]
{{{
|----------------------|---------------------|
|  - HPF terminal -    |    - HET TIMS -     |
|  ssh -X hpf@hpf      |  ssh -X hpf@hpf     |
|                      |  screen -x TIMS     |
|--------------------------------------------|
|  - HPF Client -      |  - HPF Scripts -    |
|  ssh -X hpf@hpf      |  ssh -X hpf@hpf     |
|  screen -x hpf       |  screen -x scripts  |
|----------------------|---------------------|
}}}
If you have a problem where the screen looks badly sized, like this: [[Image(hpf_screen_size.jpg, 300px)]] you can press {{{ctrl-a F}}} to "fit" the screen to the current window. (Note that is a capital F, so the sequence is Ctrl-a Shift-F.) For more details about weird Screens behavior, see [wiki:HetProcedures/RA/hpf_old#weird here]. [[br]] [[br]]

=== Calibration control ===

As of 2020 this has all been automated into '''iexp''' and '''cal''' (see help below). Old notes are retained [wiki:HetProcedures/RA/hpf_old/#cal here].
{{{
[stevenj@zeus ~]$ cal hpf -h
usage: hpfcal [-h] [-lfc [N] | -et [N]] [-et_warmup] [-et_off]
              [-evAC | -evA | -evC | -evAC_nolfc | -evA_nolfc | -mn | -mn_nolfc]
              [-mode {direct,hpf}] [--dry-run]

HPF calibrations

optional arguments:
  -h, --help          show this help message and exit
  -lfc [N]            Run LFC calibrations. Default is 1
  -et [N]             Run Etalon calibrations. Default is 1
  -et_warmup          Turn on and warmup UNeS Etalon lamp
  -et_off             Turn off UNeS Etalon lamp
  -evAC               Run evening AC calibrations.
  -evA                Run evening A calibrations.
  -evC                Run evening C calibrations.
  -evAC_nolfc         Run evening AC calibrations. LFC is down.
  -evA_nolfc          Run evening A calibrations. LFC is down.
  -mn                 Run morning calibrations.
  -mn_nolfc           Run morning calibrations. LFC is down.
  -mode {direct,hpf}  Operation mode. "direct" or "hpf". Default is "hpf"
  --dry-run           Print actions without executing commands on the hardware
}}}
[[br]]

=== Log information on the HPF machine === #logs

{{{/home/hpf/HPFics/log}}}

HPF logs are not kept indefinitely, but some recent logs exist in the above path.
There are logs for each subsystem:
 * Calib
 * Detect
 * Enviro
 * Het
 * tims_cli
 * tims_server
[[br]]

=== Science observing === #obs

Our scripts run all of our science/engineering frames and cals in the {{{screen -x scripts}}} screen session so that any failures can be diagnosed by the HPF team. In normal operations the RA will not send commands into the Screen session, but will use {{{iexp}}} and {{{cal}}} to interact with it. The full set of manual observing commands is still available [wiki:HetProcedures/RA/hpf_old#obs here].

You can look at any spectrum using {{{showhpfspec YYYYMMDD obs#}}}. If you leave off the date and observation number it will show the last exposure. Examples:
{{{showhpfspec}}}
{{{showhpfspec 20180428 2}}}
You can also look at slope images, which have sqrt(N) less noise than the "showhpfspec" quick-look images. Use commands like:
{{{(HPFpython) [hpf@hpfserver ~]$ createslopehpf 20200619 0002}}}

=== Aborting normal science exposures === #abort

An exposure in progress can '''not''' be stopped. Beware. To stop after the current exposure (before the next ramp/exposure starts) you will need another hpf xterm (i.e., {{{ssh hpf@hpf}}}) and run the command: {{{sc}}}

If you need to stop one of the observing scripts, like PartA, then try a ctrl-C in the Scripts screen session ("screen:sc"). The scripts are smart enough to catch the ctrl-C and will stop at the end of the current exposure (which you can monitor from the HPF monitor screen); after that you should be able to resume working with HPF. [[br]]

== Quick-look data and SNR == #ql

Starting in late March 2020, Goldilocks (Greg Zeimann's HPF data reduction pipeline) has been running in near real time. Details are here: [https://github.com/grzeimann/Goldilocks]

Goldilocks runs automatically on each exposure after the next exposure starts. It saves quick-look HPF data to {{{/hetdata/data//hpf/Goldilocks/}}}. Among more quantitative output files, it also saves a PNG snapshot of order 19 (~1um).
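As an aside on the slope images mentioned above: conceptually, a slope image fits a straight line up the ramp of non-destructive reads at each pixel, and the fitted slope averages down the read noise, which is why it is ~sqrt(N) cleaner than a single-read quicklook. A minimal pure-Python sketch of the per-pixel least-squares slope (only an illustration of the idea; this is not the actual {{{createslopehpf}}} code):

```python
def ramp_slope(reads):
    """Least-squares slope of pixel counts vs. read number.

    `reads` is the sequence of raw counts for ONE pixel, one value per
    non-destructive read.  Returns counts per read.  Sketch only: the
    real pipeline also handles reference pixels, cosmic rays, etc.
    """
    n = len(reads)
    xs = range(1, n + 1)                 # read numbers 1..n
    xbar = sum(xs) / n
    ybar = sum(reads) / n
    num = sum((x - xbar) * (y - ybar) for x, y in zip(xs, reads))
    den = sum((x - xbar) ** 2 for x in xs)
    return num / den

# A noiseless ramp accumulating 120 counts per read on a 1000-count pedestal:
ramp = [1000 + 120 * i for i in range(1, 11)]
print(ramp_slope(ramp))   # 120.0
```

With noisy reads, the fitted slope beats differencing the first and last read because every read contributes to the estimate.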
Order 19 is used to determine the continuum signal-to-noise ratio, which gets saved to the Night Report as "SNR", asynchronously.

You can run Goldilocks manually on VDAS with something like:
{{{python ~astronomer/Goldilocks/mountain_hpf_reduction.py 20200402 7}}}
for observation 7 taken on UT date 20200402. This can take a few minutes to run, and will produce three files in whatever directory you run it:
{{{Goldilocks_20200402T023958_v1.0_0008.fits}}}
{{{Goldilocks_20200402T023958_v1.0_0008.spectra.fits}}}
{{{Goldilocks_20200402T023958_v1.0_0008.spectra.png}}}

== Monitoring HPF exposures for saturation == #sat

Starting in 2022 we have a new tool to monitor real-time predictions for the final counts expected in an exposure in progress:
{{{ hpf_sat_check.py }}}
It runs on the workstations and produces output like the example below, where it is run during a twilight exposure. I like to leave it running in a terminal all night while I am observing. When it detects a new HPF exposure in progress it measures each read's FITS file to calculate:
 * {{{O18flux}}} - the raw pixel counts in a small region near the center of Order 18
 * {{{pedestal}}} - the raw pixel counts in an unilluminated area of the detector
 * {{{pixcnt}}} - the difference {{{O18flux}}} - {{{pedestal}}}
 * {{{Predicted total}}} - extrapolated total counts expected, where ''65k is saturation''
{{{
[stevenj@zeus ~]$ hpf_sat_check.py
checking HPF data for saturation in this UT date folder:20220108
detected latest HPF exposure: 20220108 obs #0001 requesting 28 total reads
Readnum / Nreads  O18flux  pedestal   pixcnt  Predicted total  Object
  1 / 28          18934.0   18604.0    845.0          37200.8  twilight
  2 / 28          19728.0   18617.0   1373.0          37534.2  twilight
  3 / 28          20807.0   18628.0   2179.0          38965.3  twilight
  4 / 28          21572.0   18637.0   2935.0          39182.0  twilight
  5 / 28          22354.0   18635.0   3719.0          39461.4  twilight
  6 / 28          23085.0   18668.0   4417.0          39280.7  twilight
  7 / 28          23848.0   18674.0   5174.0          39370.0  twilight
  8 / 28          24565.0   18683.0   5882.0          39270.0  twilight
  9 / 28          25289.0   18707.0   6582.0          39184.3  twilight
 10 / 28          25983.0   18711.0   7272.0          39072.6  twilight
 11 / 28          26645.0   18704.0   7941.0          38917.5  twilight
 12 / 28          27302.0   18685.0   8617.0          38791.3  twilight
 13 / 28          27935.0   18687.0   9248.0          38605.8  twilight
 14 / 28          28585.0   18728.0   9857.0          38442.0  twilight
 15 / 28          29182.0   18685.0  10497.0          38279.4  twilight
 16 / 28          29790.0   18720.0  11070.0          38092.5  twilight
 17 / 28          30377.0   18726.0  11651.0          37915.9  twilight
 18 / 28          30961.0   18725.0  12236.0          37758.8  twilight
 19 / 28          31519.0   18732.0  12787.0          37576.0  twilight
 20 / 28          32079.0   18743.0  13336.0          37413.4  twilight
 21 / 28          32586.0   18730.0  13856.0          37204.7  twilight
 22 / 28          33082.0   18719.0  14363.0          36999.2  twilight
 23 / 28          33587.0   18738.0  14849.0          36815.0  twilight
 24 / 28          34030.0   18747.0  15283.0          36577.2  twilight
 25 / 28          34476.0   18761.0  15715.0          36361.8  twilight
 26 / 28          34925.0   18742.0  16183.0          36169.8  twilight
 27 / 28          35368.0   18775.0  16593.0          35982.6  twilight
 28 / 28          35735.0   18748.0  16987.0          35735.0  twilight
}}}
If it predicts total counts above 55,000 it will play an announcement to warn the RA on duty that the HPF exposure might be close to saturation and further attention is required. [[br]]

As an example, here is what saturation looks like with a too-long exposure on a too-bright star (20211201, observation 26): [[Image(hpf_saturation_20211201_26.png, 400px)]] [[br]]

And here is an example of the type of persistence we see in a [#darks dark exposure] after a saturation event - this time the saturated source was the LFC. Note the faint ghostly persistent signal from the LFC despite this being a dark exposure: [[Image(hpf_dark.jpg, 400px)]] [[br]] [[br]]

== Restarting the TIMS client connection to the HET, done daily in OPS == #ops

As of 2021, this is accomplished with the RA OPS webpage interface (see [wiki:RA/overview#details this section] for more detail). The following is a description of doing the procedure manually: 1.
Find the 'HET Client' window in the 'TIMS' Screen session by doing a CTRL-a n (multiple times, until the window title says 'HET Client'). 2. In the HPF Scripts window or a new hpf xterm execute: {{{ tims het shutdown }}}. This will stop the dictionary python stuff in the 'HET Client' window. 3. If you don't have a command prompt in the HET Client window, CTRL-c the client. If you do end up doing this, you might want to run another {{{tims het shutdown}}} just to make sure everything is down. 4. Run the command {{{timshet}}} in the HET Client screen window:
{{{ timshet }}}
Note that this is the only TIMS client startup command that will not run with the normal startup syntax (e.g. {{{python -m TIMS.clients.tims_het}}}). This is because the library environment needs to be changed to access the HET system libraries. We only want to do this for the HET client, so I've wrapped the normal startup command in a bash script that lives in ~/bin/.

== HPF twilight sky flats (evening) == #twi

These are not required every night, but are desired when possible. Only take HPF twilights '''if the sky is completely clear'''. Brightly lit clouds can cause HPF to saturate. Run {{{ htwi }}} on zeus with the number of minutes before sunset that currently works best (3? 4?); or use [wiki:HetProcedures/RA/autotwi autotwi.py] and it will do it for you. Monitor closely on {{{hpf@hpf}}} either with {{{showhpfspec}}} (slower) or {{{ds9h}}} (faster). If saturation is imminent, deploy the FCU head manually to block the incoming light.

== HPF Darks and checking for saturation == #darks

If you need to take a dark exposure (i.e., to check for evidence of persistence after saturation), this is the procedure (in the {{{CalScripts}}} folder). First, make sure the PFIP shutter is closed.
Then, run these commands to take a 5 minute (30 read) dark exposure:
{{{ ./hpf_setup_ToTakeDark.sh }}}
{{{ ./hpf_expose.sh Dark 30 1 & }}}
Then you can view the slope image (higher S/N than the showhpfspec quicklook) for this with:
{{{createslopehpf 20200619 0002}}}
Note that taking dark exposures does not make the persistence decay any faster; it just shows you where it is.

== HPF cals in the evening and morning == #cals

Currently we run our science frames and cals in the {{{screen -x scripts}}} session so that any failures can be diagnosed by the HPF team.

Notes on special commands controlling cals:
 * When you want to pause any script before the next new set of hpf exposures starts:
   * {{{pausehpf}}}: pause what is currently running, before the next HPF exposure sequence starts
   * {{{playhpf}}}: continue the paused exposure.
 * When you want to skip the rest of the exposures in the current ongoing multiple-frame exposure sequence:
   * {{{sc}}}: This command discards all the remaining exposures in a multiple-exposure command. It is meant for use in a normal science observation. If used during a cal script, it will discard the remaining exposures inside the set of ongoing exposures, and continue on to the next set of cal exposures. Note: it will '''not''' stop the rest of the cal script.
 * When you want to abort the morning or evening cals, '''Ctrl+C''' is the way to abort the script. '''Ctrl+C''' will trigger the wrap-up scripts, and it should cleanly abort the cal scripts. (With the caveat that any ongoing exposure which has already started '''will not stop''' until it has completed.)
 * When you want to pause the cals before the next use of the FCU unit:
   * {{{pausecal}}}: the command to pause the cals before the script starts the next set of observations using the FCU head. Historically this command was made to pause the cals temporarily so that anybody on site could switch on the lights in the dome, use the FCU head for something else, etc.
Note that it may be several minutes after issuing the "pausecal" command before it takes effect (i.e., once the script gets to the next part which uses the FCU head) and it is safe to, say, turn on the dome lights.
 * {{{playcal}}}: This command will resume the cal scripts by taking control of the FCU again.

==== Evening Calibrations ====

The evening 'A' calibration sequence includes the LFC and Etalon frames (and flats) that are necessary to start the nightly drift calibration. The 'C' calibrations contain the UNe lamps, which are only present as an emergency backup source in case the LFC is offline. '''Taking the 'A' sequence is critical; skipping the 'C' sequence if you are crunched for time is OK;''' the 'C' sequence has been timed to fit within a typical stack time.
 * Nightly evening cals: PartA (43 minutes) and PartC (45 minutes).
 * Typically the Ops RA will start the PartA (sometimes A+C) cals in the afternoon, with:
{{{
cd /home/hpf/Scripts/InstConfig/CalScripts
./hpf_evening_cals_PartA.sh
}}}
 * If you don't have the full 43 minutes for the PartA cals, just run [[br]] {{{ ./matt_shortflats.sh }}} 22 minutes[[br]] or [[br]] {{{ ./matt_wave.sh }}} 14 minutes[[br]] depending on how long you have before sunset. '''The most critical cals are the wavelength (wave) cals.'''
 * If necessary, the Part "C" cals can be run during other activities, as they are purely internal cals. As long as we don't get into a situation where the dome lights are on and the PFIP shutter is open, it's ok. Part "C" can be run during VIRUS/LRS2 twilight spectra, during VIRUS/LRS2 lamp cals, during stacking, or even during LRS2/VIRUS science observations.
You can run the "C" cals (43min) with:
{{{
cd /home/hpf/Scripts/InstConfig/CalScripts
./hpf_evening_cals_PartC.sh
}}}
The following is the list of calibrations obtained by the {{{hpf_evening_cals_PartAC.sh}}} script ([DD]: requires dark dome; [CL]: requires PFIP shutter closed), along with which "Part" of the cal script executes each one (A or C):
 * [DD] [A] 5min dark exposure
 * [DD] [A] 7x 75s Quartz FCU exposures
 * [DD] [A] 4x 85s Alpha Bright Cals (using HR cal fiber)
 * [DD] [A] 2x 107s LFC FCU Cals (using FCU head) [[br]] -----other cals can start now, finished with FCU head-----
 * [CL] [A] 2x 107s LFC cal (internal)
 * [CL] [A] 1x 160s Etalon cal (internal)
 * [CL] [C] 3x 852s UNe cal (internal) [[br]] -----all done-----
 * You can tell if the cals have finished by confirming that the command prompt has returned in the HPF command window. If the job is still executing, we'll see a sequence of integers being written to this window. Also, if the script is still executing, we'll see update messages appearing in the HPF monitor window.
 * Combining A+C cals takes 1h 28m, and is usually initiated at the end of the afternoon OPS if the dome stays closed.
 * '''IMPORTANT NOTE on the order of A, C cals''':
   * Part C cals must be run AFTER Part A for lamp warmup/shutdown reasons. When Part A is run, it leaves the UNe lamp on. This lamp is needed by Part C, and Part C turns it off when it finishes. However, if Part C is not run after Part A, you must turn off the lamp manually with:
{{{
/home/hpf/Scripts/InstConfig/CalScripts/hpf_TurnOff.sh UNeS
}}}
   * Similarly, if you are running Part C without first running Part A, you must turn on the UNe lamp (and let it warm up) via:
{{{
/home/hpf/Scripts/InstConfig/CalScripts/hpf_TurnOn.sh UNeS
sleep 600
}}}
 * Additional LFC wavelength cals throughout the night: We have an additional script ({{{hpf_takeLFCHRCalExposure.sh}}}) which will take an LFC exposure through the calibration fiber (~2 minutes to complete).
This should not interfere with any other telescope operations, as it is entirely internal to HPF. If you find yourself short of LFC reference exposures, you can run this stand-alone script a few times while a different primary operation is happening (e.g. an LRS2 or VIRUS exposure, a slew, or other telescope setup). That will at least provide a few LFC reference points to tie the drift calculation to. The script accepts an optional argument - N - where N is the number of ramps you wish to take, e.g.:
{{{ ./hpf_takeLFCHRCalExposure.sh 3 }}}
You can also use the following on zeus to run this:
{{{cal hpf -lfc 3}}}

==== Morning ==== #morning

 * The night RA will start a morning calibration for HPF at the end of the night. The following exposures are executed by the {{{hpf_morning_cals.sh}}} script ([DD]: requires dark dome; [CL]: requires PFIP shutter closed):
 * [DD] 5min dark exposure
 * [DD] 7x 75s Quartz FCU exposures
 * [DD] 4x 85s Alpha Bright Cals (using HR cal fiber)
 * [DD] 6x 107s LFC FCU Cals (using FCU head) [[br]] -----dome lights can go on now, finished with FCU head-----
 * [CL] 2x 107s LFC cal (internal)
 * [CL] 4x 160s Etalon cal (internal)
 * [CL] 3x 852s UNe cal (internal) [[br]] -----all done-----
 * The HPF morning cals take about 52m to complete the portion requiring the FCU head and a dark dome. An additional 1h 5m with the PFIP shutter closed is required to complete the full calibration script run. [wiki:HetProcedures/RA/cal_timing See here for current cal timings.]
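As a sanity check on the quoted morning-cal timings, summing just the exposure times in the list above accounts for only part of the wall-clock time; the remainder is presumably overhead (readouts, FCU moves, lamp settling), which is my assumption rather than something stated in the script. A small sketch using the counts and durations listed above:

```python
# Exposure list from the morning-cals section above: (label, count, seconds).
dark_dome = [
    ("dark",         1, 300),   # 5 min dark
    ("quartz FCU",   7,  75),
    ("alpha bright", 4,  85),
    ("LFC FCU",      6, 107),
]
shutter_closed = [
    ("LFC internal",    2, 107),
    ("etalon internal", 4, 160),
    ("UNe internal",    3, 852),
]

def total_minutes(seq):
    """Sum of count * exposure time, in minutes (overheads NOT included)."""
    return sum(n * t for _, n, t in seq) / 60.0

print(round(total_minutes(dark_dome), 1))       # 30.1 (vs ~52m quoted wall clock)
print(round(total_minutes(shutter_closed), 1))  # 56.8 (vs ~1h5m quoted wall clock)
```

The gap between open-shutter seconds and quoted wall-clock time is why adding exposures (e.g. the 15xQuartz variant) lengthens the run by more than the raw exposure time alone.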
{{{
cd /home/hpf/Scripts/InstConfig/CalScripts
./hpf_morning_cals.sh
}}}
'''If you took data for UT21-1-002, which requires extremely high S/N for its targets, use the alternative morning cals script, which acquires an extra 15 quartz lamp exposures and takes 20 minutes longer.''' It requires 72 min with the FCU head in place and a dark dome, and an additional 1h 5m with the PFIP shutter closed:
{{{ [hpf@hpfserver CalScripts] ./hpf_morning_cals_15xQuartz.sh }}}
When the morning cal script is run it produces a text file listing the expected completion times, like:
{{{
[stevenj@zeus ~]$ cat /home/mcs/astronomer/Desktop/morning.txt
14:40 UT: HPF morning cals started
15:32 UT: HPF finished with dark dome
16:37 UT: HPF finished with PFIP shutter
}}}
As of January 2020, you can also specify that the "Dome lights off" message should remain on the sign outside the door to the dome, in case you have scheduled more (non-HPF) calibrations to run in the morning (e.g., LRS2 or VIRUS cals which did not get taken during the night). To leave the cal sign on, use:[[br]]
{{{ [hpf@hpfserver CalScripts] ./hpf_morning_cals.sh leave_signs_on }}}[[br]]
Then run the usual {{{cal}}} script on zeus/vdas with the {{{-eon}}} flag so that it turns off the signs after it has finished. [wiki:HetProcedures/RA/cal_timing This page] contains the latest data on measuring the time each calibration dataset requires.

==== Stopping Cals ====

If you need to stop a cal script then you can type ctrl-C in the scripts window, which will stop the script but will '''NOT''' stop the current exposure. Watch the HPF monitor screen to see when the instrument stops the exposure and goes idle. [[br]] [[br]]

==== Observing if the LFC is down ==== #nolfc

If the LFC is down, we usually go into "low precision" mode for HPF. PIs are required to specify Y or N for the RVHIGHPREC keyword in their TSL files. If we do not have the LFC or have had a thermal stability problem, we should not observe targets with RVHIGHPREC=Y that night.
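The RVHIGHPREC gating above amounts to a simple filter on the queue. A hypothetical sketch (only the keyword name and Y/N values come from the text; the function, data structure, and target names are invented for illustration - the real decision is made by the RA from the TSL and Night Report):

```python
def observable_without_lfc(targets):
    """Given {target: RVHIGHPREC flag}, return the targets we may still
    observe when the LFC is down (hypothetical helper, not a real tool).

    Targets flagged 'Y' need high-precision RVs and must be skipped.
    """
    return [name for name, flag in targets.items() if flag.upper() != "Y"]

queue = {"star_a": "Y", "star_b": "N", "star_c": "n"}
print(observable_without_lfc(queue))  # ['star_b', 'star_c']
```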
If the LFC is unavailable we must use the Etalon lamp as a calibrator instead. Most of this is automated in our {{{iexp}}} and {{{cal}}} scripts, which will read the status of the LFC from the RA Night Report drop-down menu.
 * Calibrations without LFC
   * evening cals: {{{cal hpf -evAC_nolfc}}}
     * ''or use these manual commands in the Scripts screen:''
     * {{{ ./hpf_evening_cals_PartA_withoutLFC.sh }}} '''53 minutes!'''
     * {{{ ./hpf_evening_cals_PartC.sh }}} '''no change'''
     * {{{ ./hpf_evening_cals_PartAC_withoutLFC.sh }}} '''xx min'''
   * morning cals: {{{cal hpf -mn_nolfc}}}
     * ''or use this manual command in the Scripts screen:''
     * {{{ ./hpf_morning_cals_withoutLFC.sh }}} '''34 min dark dome; 1 h 15 min PFIP shutter closed'''

'''NOTE: you must turn on the Etalon lamp''' since the A+C evening cals automatically power it down. You can do that on zeus with:
{{{cal hpf -et_warmup}}}
or by running this in the HPF scripts screen:
{{{./hpf_TurnOn.sh Etalon }}}
The etalon is brighter than the LFC, so if you take an exposure longer than 150 seconds with the etalon open you will likely saturate the detector. Thus, do not use the etalon simultaneously during an observation unless the exposures are shorter than 150 seconds. Instead, take a single etalon exposure before and after each HPF observation. [[br]] [[br]] [[br]] [[br]]

== Troubleshooting and restarting == #troubleshooting

==== Checking out what might have happened ====

We used to use screen copy mode to check what might have happened in the past. Now we log the three main screen sessions we use, so it is much easier to check the session contents. The screen sessions we are logging are hpf, TIMS, and scripts. All log files are available as hpf@hpf in: /home/hpf/SESSIONS_LOG//YYYYMM/

Note that it may be more useful to use {{{less -r}}} to watch these logs because it provides a cleaner output by hiding the ESC terminal sequences. For more details about weird Screens behavior, see [wiki:HetProcedures/RA/hpf_old#weird here].
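When digging through those session logs for a specific event, the ESC terminal sequences that {{{less -r}}} hides can also be stripped programmatically. A hypothetical helper (the function and sample text are invented; only the existence of ANSI-cluttered session logs comes from the text above):

```python
import re

# Matches common ANSI/ESC control sequences (colors, cursor moves) that
# screen session logs capture along with the real terminal text.
ANSI = re.compile(r"\x1b\[[0-9;?]*[A-Za-z]")

def grep_session_log(text, needle):
    """Return the cleaned lines of `text` that contain `needle`,
    after stripping ANSI escape sequences (hypothetical helper)."""
    hits = []
    for line in text.splitlines():
        clean = ANSI.sub("", line)
        if needle in clean:
            hits.append(clean)
    return hits

sample = "\x1b[1mtims het\x1b[0m shutdown\nall quiet\n"
print(grep_session_log(sample, "shutdown"))  # ['tims het shutdown']
```

This is the same idea as paging the log with {{{less -r}}}: the content is unchanged, only the terminal control clutter is removed.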
==== When timshet / TCS communication is lost ==== #timshet

 * Keep your email tool open. Sergey now has a routine that detects this problem and sends an email. If you see this you can fix it before it causes bad headers and we have to reject otherwise useful observations.
 * To "fix" the problem we first try to do a {{{timshet shutdown}}} like we do at OPS. If that fails, then we try a ctrl-\ in the TIMS client screen session (usually the upper right screen in rasession). Either way, you next need to restart the client by issuing the {{{timshet}}} command. You can up-arrow to find the last issue of the command. More details are [#ops here].
 * If this does not fix the TCS communication, restart the HPF TCS relay by following the procedure here: [wiki:HetProcedures/RA/TCS#hpf]
 * If the header problem has already occurred, and we are missing the TCS telemetry (RA, Dec, UT, etc.), the important thing to do is to determine if the observation was for high-precision data. Look at the full htop record and see if the value of the "rvhighprec" entry is "Y". If it is, then reject the observation as "E".
   * Also file a PR for this issue. We'll need to include the relevant lines from the HPF log files, usually in /home/hpf/SESSIONS_LOG//YYYYMM/. This will usually be the very last entry that the automated alert email shows.
 * If the header problem has already occurred, and we are missing the Q* metadata, try to repair the observations with the procedure described below.

==== FITS header cards/keywords for Q* metadata ====

One indication that HPF is not receiving this Q* metadata is that the Night Report entries for HPF targets will have blanks for the "Q#", "Target", and "Program" fields. If you need to manually change the FITS header keywords on existing data (i.e., if the metadata was missing), follow the example below to edit the raw data, and then inform Sergey and Greg to re-copy that night's data.
The old method is documented [wiki:HetProcedures/RA/HPF/old#Qfix here] as well.

1. If you need to fix one observation:
{{{username@zeus$ hpf_data_fix 20220805 4 3314}}}
2. If multiple observations need to be fixed:
{{{username@zeus$ for obsn in 4 5 6 7 8 9 10 11 ...; do hpf_data_fix 20220805 $obsn 3314; done}}}

Also add a comment in the RA Night Report to the observations where you manually added the Q* metadata, to be sure that the PI is aware of this manual intervention. While the script can usually reproduce all of the missing Q* metadata, we need to inform the PIs in case we have not fully and accurately replaced it.

'''Note that HPF data are transferred to PSU each day at 16:00 UT, so as long as these errors are corrected before then we can recover from this problem. If we do not identify it in time, it is likely that that night's data will have to be rejected and re-observed.'''

=== Detector client === #det

If you accidentally send a ctrl-C in the "HPF Detect Client" screen, you should immediately press the up-arrow and re-run the command to restart the detector client. This restart will reset the observation number, so you will also need to reset the obsnum to the most recent value with the {{{tims Detect pyhxrg:SetObsnum:NNN}}} command, where NNN is the last observation number that was taken (it does a +1 at the beginning of the exposure sequence). Also, you should notify Chad, because this kind of detector client restart can cause a small temperature transient.

=== Fixing the calibration connection === #calcon

If the etalon turn-off command hangs, or any of the cal commands hang, try the following in any hpf window with a prompt:
{{{ tims Calib shutdown }}}
In the HET TIMS screen do a '''ctrl-a n''' about five times to move to screen 2, which is the TIMS Calib screen. Note that you must hit ctrl-a and then hit n (for next screen). This full sequence is repeated to go to the next screen.
NB: if in rasession, instead of just Ctrl-a you must press Ctrl-a a (press Ctrl-a and then a again).

In the TIMS Calib screen you should do an up-arrow and see the
{{{ python -m TIMS.clients.tims_calib }}}
command, which restarts the Calib client. Wait a few minutes for it to finish. Take a quick LFC image and perhaps a quick sky exposure to make sure all of the shutters and baffles are working. [[br]]

=== Listening to the fiber scrambler === #scrambler

As of Feb 2019, you can use the "RA" knob on the audio box (under the UT clock) to control the volume of the audio feed from the HPF calibration enclosure, which plays on the speaker under the RA desk. You can also connect to this audio feed with VLC media player on MCS via:
{{{ cvlc rtp://@:4444 --sout-rtp-caching 40 }}}
(drop the leading "c" if you want a UI) or use a web browser: {{{http://192.168.66.90/xstream}}} [[br]]

=== How to stop the client (if stuck exposing) === #expose

If the TIMS detector client window is repeatedly saying something like "Sending Cli156197: Detect Cli156197 GetState:exposing" then it may be stuck exposing and require power-cycling the detector. This happens when some buffer gets full, every 6-9 months or so. If you need to completely power off the detector and restart it, you can run:
{{{
tims detect pyhxrg:GetObsnum
touch ~/stopdetectorloop
tims detect pyhxrg:PowerDown
tims detect shutdown
}}}
This will create a file in $HOME called stopdetectorloop. If that file exists, the infinite loop will not continue following the normal powerdown/shutdown sequence.
 * '''There is a 10 s pause between completing the power down and auto-restarting. You can press ctrl-C in the 'hpf' screen session during that time to abort the script.'''
 * When it comes back up, be sure to set the obsnum with {{{tims Detect pyhxrg:SetObsnum:NNN}}}, where NNN was the last exposure taken (it adds one before taking the new exposure).
You should see something like
{{{
Configured HxRG
Sending Cli15380: Detect Cli15380 PowerDown:OK
Resetting Asic for JADE1 ... Success
Powering Down Analog Supply Voltages:
  VDDA ... Success
Powering Down Digital Supply Voltages:
  VDD3p3 ... Success
  VDD2p5 ... Success
  VDDIO ... Success
  VSSIO ... Success
Powering Down Voltage Regulators:
  VDDIO ... Success
  VDD2p5 ... Success
  VDD2p5 ... Success
Powering Down Analog Main:
  5V ... Success
De-Initializing Hal
Powered Down.
}}}
{{{
Shutting down this client.
Stopping scheduled loops:
...lakeshore336
Shutting down subclients
...pyhxrg
......Failed
...lakeshore336
......Success
Sending Cli16420: Detect Cli1642
* tims detect pyhxrg:SetObstype:Sci0
message:Detect_shuttingdown
Shutting down...
Connection lost to: lakeshore336
Reason : [Failure instance: Traceback (failure with no frames): : Connection was closed cleanly. ]
Closing connection to server
Client stopped.
}}}
When you restart the system you should check to see if it has the correct observation number (assuming you have not rolled through 0 UT). There is more information [wiki:Het_Procedures/RA/hpf/old#restart here] if you want to review that. [[br]]

'''NB''' After the restart, it's important to take LFC cals. If the problem still exists and LFC cals won't save to disk, you should get in touch with George or Herman to restart the HPF driver. [[br]]

For more details about weird Screens behavior, see [wiki:HetProcedures/RA/hpf_old#weird here].