Sébastien,
(FW to the ML, as this could be useful for many):
Le 13/05/2014 16:10, Sébastien Carniato a écrit :
> Hi Thomas,
> thank you for your quick answer !
>
> Indeed, it works when I use only Decimate. I have to check again, but
> I think my data streams are indeed using the same sample rate.
You can also check that with the SQLiteManager (group data_availability
by "sampling_rate")
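That check can also be scripted. A minimal sketch with the stdlib sqlite3 module; the table and column names ("data_availability", "samplerate") are assumptions about the MSNoise schema, so adapt them to whatever SQLiteManager shows you:

```python
import sqlite3

def sampling_rates(conn):
    """Group the data_availability table by sample rate, as suggested above.

    NOTE: 'data_availability' and 'samplerate' are assumed names; check your
    actual msnoise database schema before using this.
    """
    rows = conn.execute(
        "SELECT samplerate, COUNT(*) FROM data_availability "
        "GROUP BY samplerate ORDER BY samplerate")
    return {rate: n for rate, n in rows}
```

If this returns more than one key, the archive mixes sample rates and running Decimate alone will not be enough.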
> I have a bunch of other questions :
>
> *Question 1 : *So now that compute_cc works :), _I would like to know
> if the computed correlations are stored_, and in that case, where ? I
> put the output folder on my desktop with the configurator, but nothing
> was created so far...
If you set "keep_all" to "Y", the 30 minutes CC are stored in the
CROSS_CORRELATION folder, named "hh_mm.cc", and these are miniseed files
(I know, bad naming, this is going to change in the future).
>
> *Question 2 :* My data are not continuous. Indeed, there are a lot of
> days missing, and the maximum length of recording is one day and a
> half. _Is it a problem for running MSNoise_ ?
Well, MSNoise correlates M-minute windows (30 minutes by default);
if data is missing, the CC will be corrupted (NaN or infs) and will not
be stored. So, normally, no problem. BUT, expect results to be quite
strange if you have only, say, 20% of the day filled with good data.
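That "corrupted windows are not stored" behaviour boils down to a finiteness check. A minimal sketch of such a guard (this is an illustration, not MSNoise's actual code):

```python
import numpy as np

def window_is_usable(cc):
    """Keep a 30-minute CC window only if every sample is finite.

    Gaps in the input typically produce NaN or inf after normalisation,
    so a single non-finite value is enough to reject the window.
    """
    cc = np.asarray(cc, dtype=float)
    return bool(np.all(np.isfinite(cc)))
```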
>
> *Question 3 :* The only way to get a filter is to create one with the
> sqlite manager. Is it normal ?
No, there is a bug in the Configurator: it doesn't show empty lines.
> *
> *
> *Question 4 : *and the last one : Now when I launch compute_cc, i get
> the error :
>
> 2014-05-13 16:04:21,314 [INFO] *** Starting: Compute CC ***
> 2014-05-13 16:04:21,354 [INFO] Will compute ZZ
> 2014-05-13 16:04:21,479 [INFO] New CC Job: 2011-01-18 (6 pairs with 3
> stations)
> AN.LF1:AN.LF1
> 2014-05-13 16:04:21,595 [DEBUG] Processing pair: AN.LF1 vs AN.LF1
> <msnoise_table_def.Station object at 0x4a06ed0>
> <msnoise_table_def.Station object at 0x4a06ed0>
> s03compute_cc.py:407: DeprecationWarning: using a non-integer number
> instead of an integer will result in an error in the future
> trames2hWb[i] = np.zeros(Nfft)
> s03compute_cc.py:407: DeprecationWarning: using a non-integer number
> instead of an integer will result in an error in the future
> trames2hWb[i] = np.zeros(Nfft)
> /home/stag01/Bureau/MSNoise-1.2.3/myCorr.py:68: RuntimeWarning:
> invalid value encountered in divide
> corr /= np.real(normFact)
> Traceback (most recent call last):
> File "s03compute_cc.py", line 412, in <module>
> "%Y-%m-%d", time.gmtime(basetime + itranche * min30 / fe))
> NameError: name 'basetime' is not defined
>
> Do you know what can be the source of the problem ?
Well, first, you should not do autocorrelation with this version of
MSNoise. It will run, but the result is wrong. I'll push a new release
very soon that corrects that. This problem, though, seems independent.
Reading your next mail, it could be related to the file reading part,
but it does look strange...
Best regards,
Thomas
Thomas,
When trying to use s04stack.py, I keep getting the message "Found 000 updated days" when it is not true. I have tried changing the "-i" interval to no avail (e.g. s04stack.py -m -r -i 100). I think I have traced the problem to the following queries in database_tools.py:
if pair == '%':
    days = session.query(Job).filter(Job.day >= date1).\
        filter(Job.day <= date2).filter(Job.type == type).\
        filter(Job.lastmod >= lastmod).\
        group_by(Job.day).order_by(Job.day).all()
else:
    days = session.query(Job).filter(Job.pair == pair).\
        filter(Job.day >= date1).filter(Job.day <= date2).\
        filter(Job.type == type).filter(Job.lastmod >= lastmod).\
        group_by(Job.day).order_by(Job.day).all()
The call from s04stack.py goes to the ELSE condition, but days == [ ] after the query.
If I manually set pair = '%' on the line before, then 'days' returns what looks to be correct. As the only difference between the two queries is the filter on Job.pair, maybe that is the problem. I am attaching a screenshot of my jobs table. I manually changed 'lastmod' at one point to yesterday in case the -i switch was not working correctly.
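(A debugging sketch, not a diagnosis: one common cause of an empty result from a pair filter is a character-level mismatch between the pair string the script builds and what the jobs table actually stores. Listing the stored values makes that visible. Table and column names below are guesses based on the queries above.)

```python
import sqlite3

def distinct_pairs(conn, table="jobs"):
    """Return the distinct 'pair' strings stored in the jobs table so they
    can be compared character-for-character with the pair used in the filter."""
    return [row[0] for row in conn.execute(
        "SELECT DISTINCT pair FROM %s ORDER BY pair" % table)]

# tiny demo with a throwaway in-memory table
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE jobs (pair TEXT, day TEXT)")
conn.executemany("INSERT INTO jobs VALUES (?, ?)",
                 [("AN.LF1:AN.LF2", "2011-01-18"),
                  ("AN.LF1:AN.LF2", "2011-01-19"),
                  ("AN.LF1:AN.LF3", "2011-01-18")])
print(distinct_pairs(conn))    # two distinct pairs
```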
I am on Mac OS X.
Thanks,
Rob
+----------------------------+
Robert E. Abbott, Ph.D.
Sandia National Laboratories
Geophysics Department MS 0750
P.O. Box 5800
Albuquerque, NM 87185-0750
(505) 845-0266
+----------------------------+
Hi Oscar,
(cc to the mailing list, it can be very useful to others too)
Le 13/05/2014 17:11, Oscar Alberto Castro Artola a écrit :
> Hello Thomas,
>
> I have checked the CCF for 2006-01-01 and it is not empty. Everything
> seems to be alright! Although I realized that when my filters are too
> narrow I get this error:
>
> *oscar@bayta:~/sweet_noise$ py s05compute_mwcs.py*
> 2014-05-13 10:03:03,361 [INFO] We will recompute all MWCS based on the
> new REF for TO.BUCU:TO.XALI
> 2014-05-13 10:03:03,798 [INFO] We will recompute all MWCS based on the
> new REF for TO.PALM:TO.PLLI
> 2014-05-13 10:03:04,213 [INFO] We will recompute all MWCS based on the
> new REF for TO.PALM:TO.XALI
> 2014-05-13 10:03:04,609 [INFO] We will recompute all MWCS based on the
> new REF for TO.PLLI:TO.XALI
> 2014-05-13 10:03:05,989 [INFO] There are MWCS jobs for some days to
> recompute for TO.BUCU:TO.XALI
> 2014-05-13 10:03:06,009 [DEBUG] Processing MWCS for:
> TO_BUCU_TO_XALI.ZZ.04 - 2006-01-06 - 01 days
> /home/oscar/anaconda/lib/python2.7/site-packages/numpy/core/_methods.py:55:
> RuntimeWarning: Mean of empty slice.
> warnings.warn("Mean of empty slice.", RuntimeWarning)
> /home/oscar/anaconda/lib/python2.7/site-packages/numpy/core/_methods.py:77:
> RuntimeWarning: Degrees of freedom <= 0 for slice
> warnings.warn("Degrees of freedom <= 0 for slice", RuntimeWarning)
> Traceback (most recent call last):
> File "s05compute_mwcs.py", line 122, in <module>
> cur, ref, f.mwcs_low, f.mwcs_high, goal_sampling_rate, -maxlag,
> f.mwcs_wlen, f.mwcs_step)
> File "/home/oscar/sweet_noise/MWCS.py", line 145, in mwcs
> res = sm.regression.linear_model.WLS(phi, v, w**2).fit()
> File
> "/home/oscar/anaconda/lib/python2.7/site-packages/statsmodels/regression/linear_model.py",
> line 381, in __init__
> weights=weights, hasconst=hasconst)
> File
> "/home/oscar/anaconda/lib/python2.7/site-packages/statsmodels/regression/linear_model.py",
> line 79, in __init__
> super(RegressionModel, self).__init__(endog, exog, **kwargs)
> File
> "/home/oscar/anaconda/lib/python2.7/site-packages/statsmodels/base/model.py",
> line 137, in __init__
> self.initialize()
> File
> "/home/oscar/anaconda/lib/python2.7/site-packages/statsmodels/regression/linear_model.py",
> line 88, in initialize
> self.rank = rank(self.exog)
> File
> "/home/oscar/anaconda/lib/python2.7/site-packages/statsmodels/tools/tools.py",
> line 381, in rank
> D = svdvals(X)
> File
> "/home/oscar/anaconda/lib/python2.7/site-packages/scipy/linalg/decomp_svd.py",
> line 146, in svdvals
> check_finite=check_finite)
> File
> "/home/oscar/anaconda/lib/python2.7/site-packages/scipy/linalg/decomp_svd.py",
> line 100, in svd
> full_matrices=full_matrices, overwrite_a=overwrite_a)
> ValueError: failed to create intent(cache|hide)|optional array-- must
> have defined dimensions but got (0,)
>
> That filter is between 0.04 - 0.046 Hz.
1/ the reason for the first error (filters too narrow) is that you don't
have enough data points to do a weighted linear regression. If you use
a 10-second MWCS_wlen at 20 Hz, you have 200 data points to compute on.
The FFT is done on the next power of 2 samples, so 256 here. Meaning,
you have 256 FFT datapoints between 0 and Nyquist = 10 Hz, or a frequency
step of 0.039 Hz. Your filter is 0.006 Hz wide, so, at best, you'll have
1 data point (+ [0,0]) to compute the phase shift, and this is not enough.
If you want to use such a narrow band, you should either CC at 100 Hz,
or edit the s05 file to pad the input data with zeros, in order to have
more resolution in the FFT (go up to the second or third next power of 2).
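The zero-padding idea can be sketched as follows. This uses the standard df = fs/Nfft convention for the step of np.fft.rfftfreq; exact bin counts in s05 may differ, but the effect is the same: padding two powers of 2 higher shrinks the frequency step by a factor of 4.

```python
import numpy as np

fs = 20.0                        # sampling rate (Hz), example from above
n = int(10.0 * fs)               # 10 s MWCS window -> 200 samples
sig = np.random.randn(n)         # stand-in for a windowed CC segment

for extra in (0, 2):             # 0 = next power of 2, 2 = two powers higher
    nfft = int(2 ** (np.ceil(np.log2(n)) + extra))
    freqs = np.fft.rfftfreq(nfft, d=1.0 / fs)
    spec = np.fft.rfft(sig, n=nfft)   # zero-padded FFT of the same data
    print(nfft, freqs[1])             # frequency step shrinks with padding
```

Note that padding interpolates the spectrum onto a finer grid; it does not add information, but it does give the MWCS phase fit more samples inside a narrow band.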
> When my filters are broader there is no problem with
> /s05compute_mwcs.py/, but when I try to run /s06compute_dtt.py/ I get
> this:
>
> *oscar@bayta:~/sweet_noise$ py s06compute_dtt.py *
> */home/oscar/anaconda/lib/python2.7/site-packages/numpy/oldnumeric/__init__.py:11:
> ModuleDeprecationWarning: The oldnumeric module will be dropped in
> Numpy 1.9*
> * warnings.warn(_msg, ModuleDeprecationWarning)*
> *2014-05-13 10:07:17,935 [INFO] *** Starting: Compute DT/T ****
> */home/oscar/anaconda/lib/python2.7/site-packages/setuptools-2.2-py2.7.egg/pkg_resources.py:991:
> UserWarning: /home/oscar/.python-eggs is writable by group/others and
> vulnerable to attack when used with get_resource_filename. Consider a
> more secure location (set with .set_extraction_path or the
> PYTHON_EGG_CACHE environment variable).*
> *2014-05-13 10:07:18,139 [DEBUG] Found 731 updated days*
> *2014-05-13 10:07:18,203 [INFO] Loading mov=1 days for filter=12*
> *2014-05-13 10:07:18,204 [DEBUG] Processing 2005-01-01*
> *2014-05-13 10:07:18,216 [DEBUG] Processing 2005-01-02*
> *2014-05-13 10:07:18,222 [DEBUG] Processing 2005-01-03*
> *2014-05-13 10:07:18,228 [DEBUG] Processing 2005-01-04*
> *2014-05-13 10:07:18,235 [DEBUG] Processing 2005-01-05*
> *2014-05-13 10:07:18,243 [DEBUG] Processing 2005-01-06*
> *2014-05-13 10:07:18,250 [DEBUG] Processing 2005-01-07*
> *2014-05-13 10:07:18,257 [DEBUG] Processing 2005-01-08*
> *2014-05-13 10:07:18,263 [DEBUG] Processing 2005-01-09*
> *2014-05-13 10:07:18,271 [DEBUG] Processing 2005-01-10*
> *2014-05-13 10:07:18,278 [DEBUG] Processing 2005-01-11*
> *2014-05-13 10:07:18,284 [DEBUG] Processing 2005-01-12*
> *Traceback (most recent call last):*
> * File "s06compute_dtt.py", line 304, in <module>*
> * VecXfilt, prepend=False)*
> * File
> "/home/oscar/anaconda/lib/python2.7/site-packages/statsmodels/tools/tools.py",
> line 289, in add_constant*
> * var0 = data.var(0) == 0*
> * File
> "/home/oscar/anaconda/lib/python2.7/site-packages/numpy/core/_methods.py",
> line 111, in _var*
> * ret = ret.dtype.type(ret / rcount)*
> *AttributeError: 'float' object has no attribute 'dtype'*
>
2/ the second error is a bug in numpy 1.8.0, you should try to update
numpy to 1.8.1 ! This is not MSNoise :-)
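A quick guard for that case, sketched as a helper (the version triple parse is deliberately conservative and treats anything it cannot parse as "not affected"):

```python
import numpy as np

def is_buggy_numpy(version=np.__version__):
    """True only for the exact numpy release (1.8.0) affected by the
    _methods.py AttributeError shown in the traceback above."""
    try:
        nums = tuple(int(p) for p in version.split(".")[:3])
    except ValueError:
        return False          # pre-releases etc.: don't flag
    return nums == (1, 8, 0)
```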
Best regards,
Thomas
Thomas, thanks for the reply. I did find some empty DTT days, and fixed
that. Now, I get a different error:
/usr/lib64/python2.7/site-packages/pandas/core/common.py:195:
DeprecationWarning: numpy boolean negative (the unary `-` operator) is
deprecated, use the bitwise_xor (the `^` operator) or the logical_xor
function instead.
return -res
Traceback (most recent call last):
File "s07plot_dtt.py", line 130, in <module>
plt.fill_between(ALL.index,ALL[dttname]-ALL[errname],ALL[dttname]+ALL[errname],lw=1,color='red',zorder=-1,alpha=0.3)
File "/usr/lib64/python2.7/site-packages/matplotlib/pyplot.py", line
2757, in fill_between
interpolate=interpolate, **kwargs)
File "/usr/lib64/python2.7/site-packages/matplotlib/axes.py", line 6988,
in fill_between
x = ma.masked_invalid(self.convert_xunits(x))
File "/usr/lib64/python2.7/site-packages/numpy/ma/core.py", line 2244, in
masked_invalid
condition = ~(np.isfinite(a))
TypeError: ufunc 'isfinite' not supported for the input types, and the
inputs could not be safely coerced to any supported types according to the
casting rule ''safe''
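A minimal reproduction of that TypeError, independent of matplotlib: np.isfinite refuses object arrays, which is typically what string-valued data becomes when read back from disk. Coercing to a real float dtype before fill_between (an assumed fix; the plotting script itself is not shown here) avoids it:

```python
import numpy as np

# dtt values read from disk can end up as an object array of strings,
# which is exactly what np.isfinite refuses (the TypeError above)
raw = np.array(["0.12", "nan", "0.34"], dtype=object)
try:
    np.isfinite(raw)
    failed = False
except TypeError:
    failed = True             # same failure mode as in the traceback

clean = raw.astype(float)     # coerce before plotting / fill_between
print(failed, np.isfinite(clean))
```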
But if I comment out the line that caused this, the process works, and
attached is the output. All is quiet under Auckland, I am happy to report.
Cheers,
Kasper
On 12 May 2014 19:02, Thomas Lecocq <thomas.lecocq(a)oma.be> wrote:
> Kasper,
>
> When you changed the mov_stack value, did you re-run MSNoise "from
> scratch". The thing is, if jobs have already been done, say, for January
> 1 -> January 10, then you modify the mov_stack, and run s04_stack.py a
> few days after, Jan1->10 will not be stacked for the 30 day mov_stack.
> you have, at least, to run s04stack.py with a "--interval X" (X > 1,
> your case probably 20 or 30 max), so the CC jobs in the last 30 days
> will be re-considered for stacking.
>
> As for the plot problem, I've noticed a slight bug in get_stations()
> in database_tools, but that should only affect the results if the
> stations were input manually in the database and thus not necessarily in
> alphabetical order:
> (bugfix:
> https://github.com/ROBelgium/MSNoise/commit/0dca3cfe101c56f3ed8f6e537412857…
> ).
> In your case, it looks like the 1-day data is not loaded, could you
> confirm the data is present in the folders/files archive ? You could
> also print the "day" variable, to know which file is "empty" and causes
> the bug. Normally, empty dtt files should not exist...
>
> Let me know,
>
> Tom
>
>
> Le 09/05/2014 02:18, Kasper van Wijk a écrit :
> > Dear All,
> >
> > The whole cron.sh was humming along nicely on our MSNoise network,
> until I
> > changed from stacking 1,2 and 5 days (initially) to 1,2,3,4,5,30 days.
> Now,
> > all steps seem to run UNTIL the plot command. Here is the error (after
> > putting a print statement on the cause of the problem, df):
> >
> > (I run as sudo, because root is running the cron.sh overnight:)
> >
> > [kasper@localhost MSNoise-master]$ sudo python s07plot_dtt.py
> > loading 1 days
> > Empty DataFrame
> > Columns: [A, EA, EM, EM0, M, M0, Pairs]
> > Index: []
> > Empty DataFrame
> > Columns: [A, EA, EM, EM0, M, M0, Pairs]
> > Index: []
> > Traceback (most recent call last):
> > File "s07plot_dtt.py", line 102, in <module>
> > alldf = alldf.append(df)
> > File "/usr/lib64/python2.7/site-packages/pandas/core/frame.py", line
> > 4266, in append
> > verify_integrity=verify_integrity)
> > File "/usr/lib64/python2.7/site-packages/pandas/tools/merge.py", line
> > 883, in concat
> > return op.get_result()
> > File "/usr/lib64/python2.7/site-packages/pandas/tools/merge.py", line
> > 964, in get_result
> > new_data = self._get_concatenated_data()
> > File "/usr/lib64/python2.7/site-packages/pandas/tools/merge.py", line
> > 1007, in _get_concatenated_data
> > new_data[item] = self._concat_single_item(rdata, item)
> > File "/usr/lib64/python2.7/site-packages/pandas/tools/merge.py", line
> > 1094, in _concat_single_item
> > return com._concat_compat(to_concat, axis=self.axis - 1)
> > File "/usr/lib64/python2.7/site-packages/pandas/core/common.py", line
> > 1191, in _concat_compat
> > axis=axis)
> > ValueError: need at least one array to concatenate
> > _______________________________________________
> > MSNoise mailing list
> > MSNoise(a)mailman-as.oma.be
> > http://mailman-as.oma.be/mailman/listinfo/msnoise
>
>
Hello MSNoise users,
when trying to launch s03compute_cc.py, I get the error :
/appl/anaconda/lib/python2.7/site-packages/numpy-1.8.1-py2.7-linux-x86_64.egg/numpy/oldnumeric/__init__.py:11:
ModuleDeprecationWarning: The oldnumeric module will be dropped in Numpy 1.9
warnings.warn(_msg, ModuleDeprecationWarning)
Traceback (most recent call last):
File "s03compute_cc.py", line 70, in <module>
from scikits.samplerate import resample
File "/home/stag01/Bureau/MSNoise-1.2.3/scikits/samplerate/__init__.py",
line 7, in <module>
from _samplerate import resample, available_convertors,
src_version_str, \
File "/appl/anaconda/lib/python2.7/site-packages/pyximport/pyximport.py",
line 431, in load_module
language_level=self.language_level)
File "/appl/anaconda/lib/python2.7/site-packages/pyximport/pyximport.py",
line 210, in load_module
mod = imp.load_dynamic(name, so_path)
ImportError: Building module scikits.samplerate._samplerate failed:
['ImportError:
/home/stag01/.pyxbld/lib.linux-x86_64-2.7/scikits/samplerate/_samplerate.so:
undefined symbol: src_get_description\n']
We tried to reinstall scikits, but that turned out not to help...
I'm working on Linux.
Thanks.
Sebastien Carniato
ENSG Student
Dear All,
The whole cron.sh was humming along nicely on our MSNoise network, until I
changed from stacking 1,2 and 5 days (initially) to 1,2,3,4,5,30 days. Now,
all steps seem to run UNTIL the plot command. Here is the error (after
putting a print statement on the cause of the problem, df):
(I run as sudo, because root is running the cron.sh overnight:)
[kasper@localhost MSNoise-master]$ sudo python s07plot_dtt.py
loading 1 days
Empty DataFrame
Columns: [A, EA, EM, EM0, M, M0, Pairs]
Index: []
Empty DataFrame
Columns: [A, EA, EM, EM0, M, M0, Pairs]
Index: []
Traceback (most recent call last):
File "s07plot_dtt.py", line 102, in <module>
alldf = alldf.append(df)
File "/usr/lib64/python2.7/site-packages/pandas/core/frame.py", line
4266, in append
verify_integrity=verify_integrity)
File "/usr/lib64/python2.7/site-packages/pandas/tools/merge.py", line
883, in concat
return op.get_result()
File "/usr/lib64/python2.7/site-packages/pandas/tools/merge.py", line
964, in get_result
new_data = self._get_concatenated_data()
File "/usr/lib64/python2.7/site-packages/pandas/tools/merge.py", line
1007, in _get_concatenated_data
new_data[item] = self._concat_single_item(rdata, item)
File "/usr/lib64/python2.7/site-packages/pandas/tools/merge.py", line
1094, in _concat_single_item
return com._concat_compat(to_concat, axis=self.axis - 1)
File "/usr/lib64/python2.7/site-packages/pandas/core/common.py", line
1191, in _concat_compat
axis=axis)
ValueError: need at least one array to concatenate
Hi Thomas;
Congratulations for this new paper! I can’t wait to use those new features.
Esteban
Graduate Student in Seismology
University of California Santa Cruz
Santa Cruz, California 95064
echavess(a)ucsc.edu
On May 5, 2014, at 5:00 AM, msnoise-request(a)mailman-as.oma.be wrote:
> Send MSNoise mailing list submissions to
> msnoise(a)mailman-as.oma.be
>
> To subscribe or unsubscribe via the World Wide Web, visit
> http://mailman-as.oma.be/mailman/listinfo/msnoise
> or, via email, send a message with subject or body 'help' to
> msnoise-request(a)mailman-as.oma.be
>
> You can reach the person managing the list at
> msnoise-owner(a)mailman-as.oma.be
>
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of MSNoise digest..."
>
>
> Today's Topics:
>
> 1. MSNoise in SRL (Thomas Lecocq)
>
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Mon, 05 May 2014 12:55:53 +0200
> From: Thomas Lecocq <thomas.lecocq(a)oma.be>
> To: msnoise(a)mailman-as.oma.be
> Subject: [MSNoise] MSNoise in SRL
> Message-ID: <53676E39.9040407(a)oma.be>
> Content-Type: text/plain; charset=ISO-8859-1; format=flowed
>
> Dear MSNoise users,
>
> I'm very pleased to announce that the paper Corentin, Florent and I
> submitted 1 year ago has finally been published in the Electronic
> Seismologist (ES) column in the May/June issue of Seismological Research
> Letters ! The great advantage of ES is that it is fully open access !!
>
> here is the link :
> http://srl.geoscienceworld.org/content/85/3/715.full
>
>
> Now that 1.2.4 is out (some odd bugs were corrected), I'm planning on
> those changes/additions:
>
> for 1.2.5:
> - correctly computing autocorrelation
> - correcting the cron.sh code and better logging
>
> >= 1.3:
> - adding support for instrument response removal
> - adding the new configuration interface (you're going to love this
> one !)
> - adding more plot_* functions / interaction examples
>
> Best regards from Brussels,
>
> Thomas
>
> --
> Dr. Thomas Lecocq
> Geologist
>
> Seismology - Gravimetry
> Royal Observatory of Belgium
>
> *
> * * * * *
> * * * *
> ---------
> http://www.seismology.be
> http://twitter.com/#!/Seismologie_be
>
>
>
> ------------------------------
>
> _______________________________________________
> MSNoise mailing list
> MSNoise(a)mailman-as.oma.be
> http://mailman-as.oma.be/mailman/listinfo/msnoise
>
>
> End of MSNoise Digest, Vol 7, Issue 3
> *************************************
Dear MSNoise users,
I'm very pleased to announce that the paper Corentin, Florent and I
submitted 1 year ago has finally been published in the Electronic
Seismologist (ES) column in the May/June issue of Seismological Research
Letters ! The great advantage of ES is that it is fully open access !!
here is the link :
http://srl.geoscienceworld.org/content/85/3/715.full
Now that 1.2.4 is out (some odd bugs were corrected), I'm planning on
those changes/additions:
for 1.2.5:
- correctly computing autocorrelation
- correcting the cron.sh code and better logging
>= 1.3:
- adding support for instrument response removal
- adding the new configuration interface (you're going to love this
one !)
- adding more plot_* functions / interaction examples
Best regards from Brussels,
Thomas
--
Dr. Thomas Lecocq
Geologist
Seismology - Gravimetry
Royal Observatory of Belgium
*
* * * * *
* * * *
---------
http://www.seismology.be
http://twitter.com/#!/Seismologie_be
Dear mailing list,
I have computed 001_day_stacks between a station pair for the year 2013. Of course, there are some gaps/lack of data during the year, so I do not have daily stacks for some days of 2013. When I move to the next step of stacking these 001_day_stacks, new files are appearing as stacks, with dates on which I didn't have data.
For example, I have 001_day_stacks for February, from the 5th to the 28th, except for the days 9, 10, 14, 15, 22, and 23. After the 005_days stacking, there are some files appearing as 005_days stacks with dates from the days (9, 10, 14, 15, 22, and 23) on which I didn't have data. The same thing happens for the other months too. It seems that, somehow, the missing days within the months are filled after stacking. Additionally, some other files are appearing as 005_days stacks with dates after the 31st of December 2013 (for example 2014-01-01.MSEED, 2014-01-02.MSEED...).
I have attached 3 screenshots of the folders containing the files for the daily cross-correlations, 005_days_stacks and 010_days_stacks, to help you get a picture of what I am talking about. You can observe the same problem in the 010_days stacks.
Which daily stacks created the 005_days stack of 2013-02-10.MSEED, for example? Or which daily stacks created the 005_days stack of 2014-01-04.MSEED? Is this normal or do I have to arrange something before the stacking process?
Thank you very much in advance.
Dimitris
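(A sketch of one possible explanation, not a statement about MSNoise's actual code: if the N-day stack labelled with date D is built from whatever daily CCs fall in the window [D - N + 1, D], then a 005_days file dated 2013-02-10 can exist even though 02-10 itself has no data, because 02-06 to 02-08 do.)

```python
from datetime import date, timedelta

def contributing_days(label, n, available):
    """Daily stacks that would feed an n-day stack labelled `label`,
    ASSUMING the window covers [label - n + 1, label] (hypothetical
    convention used for illustration only)."""
    return [label - timedelta(days=k) for k in range(n)
            if label - timedelta(days=k) in available]

# February 2013 example from the mail: days 9, 10, 14, 15, 22, 23 missing
avail = {date(2013, 2, d) for d in range(5, 29)
         if d not in (9, 10, 14, 15, 22, 23)}
print(contributing_days(date(2013, 2, 10), 5, avail))
```

Under that assumption the 2013-02-10 stack is non-empty (fed by Feb 6-8), and a window ending in early January 2014 can still contain late-December 2013 data, which would explain the 2014-01-XX.MSEED files.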