Dear Thomas and msnoise users,
I am trying for the very first time to install and run msnoise on my Linux
machine (Ubuntu 12.04 Precise).
The bugreport output looks OK to me (see the end of my mail), both with the
option -s and with -m.
My msnoise test also apparently works fine (output of my test attached).
My problem is with msnoise install: I obtain a list of messages that are
unclear (at least to me), with no way to select sqlite or mysql (see also the
end of my mail).
Any hints are welcome!
Thanks for your work on this package,
and my best regards,
Giuseppe Di Giulio
*************************************
My bugreport
(snakes)dati@salomon-OptiPlex-9010:~/P$ sudo msnoise bugreport -s
Let's Bug Report MSNoise !
************* Computer Report *************
----------------+SYSTEM+-------------------
Linux
salomon-OptiPlex-9010
3.2.0-45-generic
#70-Ubuntu SMP Wed May 29 20:12:06 UTC 2013
x86_64
x86_64
debian - wheezy/sid -
----------------+PYTHON+-------------------
Python:3.5.1 |Continuum Analytics, Inc.| (default, Dec 7 2015, 11:16:01)
[GCC 4.4.7 20120313 (Red Hat 4.4.7-1)]
****************************************
************************************
My msnoise install
(snakes)dati@salomon-OptiPlex-9010:~/P$ sudo msnoise install
Launching the installer
Welcome to MSNoise
What database technology do you want to use?
[1] sqlite
[2] mysql
Traceback (most recent call last):
File "/home/dati/anaconda2/envs/snakes/bin/msnoise", line 9, in <module>
load_entry_point('msnoise==0+unknown', 'console_scripts', 'msnoise')()
File "/home/dati/anaconda2/envs/snakes/lib/python3.5/site-packages/msnoise/scripts/msnoise.py", line 614, in run
    cli(obj={})
File "/home/dati/anaconda2/envs/snakes/lib/python3.5/site-packages/click/core.py", line 716, in __call__
    return self.main(*args, **kwargs)
File "/home/dati/anaconda2/envs/snakes/lib/python3.5/site-packages/click/core.py", line 696, in main
    rv = self.invoke(ctx)
File "/home/dati/anaconda2/envs/snakes/lib/python3.5/site-packages/click/core.py", line 1060, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/home/dati/anaconda2/envs/snakes/lib/python3.5/site-packages/click/core.py", line 889, in invoke
    return ctx.invoke(self.callback, **ctx.params)
File "/home/dati/anaconda2/envs/snakes/lib/python3.5/site-packages/click/core.py", line 534, in invoke
    return callback(*args, **kwargs)
File "/home/dati/anaconda2/envs/snakes/lib/python3.5/site-packages/msnoise/scripts/msnoise.py", line 204, in install
    main()
File "/home/dati/anaconda2/envs/snakes/lib/python3.5/site-packages/msnoise/s000installer.py", line 48, in main
    tech = int(raw_input('Choice:'))
NameError: name 'raw_input' is not defined
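From a quick look at the last frame, s000installer.py calls raw_input(),
which only exists in Python 2, while my environment runs Python 3.5 (where
it was renamed to input()). I guess a small compatibility shim like this
would avoid the crash -- just a sketch on my side, not the actual MSNoise
code:

try:
    input_func = raw_input   # Python 2
except NameError:
    input_func = input       # Python 3: raw_input was renamed to input

tech = int(input_func('Choice:'))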
**********************************************+
Hi all,
I would say that before processing a given dataset it is important to get to know your data and define what you are looking for,
i.e. the scientific question. If one knows or understands the source that is generating the random wave field, its interaction with the medium, and its susceptibility to changes, then it is easier to select a dataset, moving windows, frequencies, etc.
For instance, small subsets centered on some specific transient event work well initially.
Esteban
> On May 2, 2016, at 12:44 AM, msnoise-request(a)mailman-as.oma.be wrote:
>
>
> Today's Topics:
>
> 1. Re: advice on processing database subsets (Lukas Preiswerk)
> 2. Re: advice on processing database subsets (Thomas Lecocq)
> 3. Re: advice on processing database subsets (Phil Cummins)
> 4. Re: advice on processing database subsets (Thomas Lecocq)
>
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Sun, 1 May 2016 16:52:06 +0200
> From: Lukas Preiswerk <preiswerk(a)vaw.baug.ethz.ch>
> To: Python Package for Monitoring Seismic Velocity Changes using
> Ambient Seismic Noise <msnoise(a)mailman-as.oma.be>
> Subject: Re: [MSNoise] advice on processing database subsets
> Message-ID:
> <CAOSnoQ3rCc5U7i9TVTBMKngZfcM1gCYT7paudi9Hg=x=rCSG3g(a)mail.gmail.com>
> Content-Type: text/plain; charset=UTF-8
>
> Hi all
>
> I was in a similar situation as Phil, and I used (1). It's not
> straightforward to copy the database and make msnoise work again in a new
> directory, but it's definitely possible.
> I actually think it would be a nice addition to msnoise to not only have an
> option for multiple filters, but also for multiple other parameters (window
> lengths, overlaps, windsorizing, etc.). This would really help in the first
> "exploratory phase" of finding out the best way to process your
> dataset.
> What do you think of this idea? Practically, I would implement it by moving
> these parameters (window length etc.) into the filter parameters and
> treating them the same way as an additional filter. As far as I understand
> the code, this wouldn't require many adaptations?
>
> Lukas
>
>
>
> 2016-05-01 11:35 GMT+02:00 Thomas Lecocq <Thomas.Lecocq(a)seismology.be>:
>
>> Hi Phil,
>>
>> I'd say (3) would be better indeed. You can script msnoise using the api.
>> If you need to change params in the config, you can alternatively use the
>> "msnoise config --set name=value" command.
>>
>> Please keep me updated on your progress & tests!
>>
>> Thomas
>>
>>
>>
>> On 01/05/2016 10:34, Phil Cummins wrote:
>>
>>> Hi again,
>>>
>>> As some of you may recall, I'm just getting started with msnoise. I have
>>> a large database and have managed to get my station and data availability
>>> tables populated.
>>> At this point, rather than running through the whole database, processing
>>> it with parameters I hope might work, I'd rather process small subsets,
>>> e.g. 1 day at a time, to experiment with window lengths, overlaps, etc., to
>>> find what seems optimal. My question is, what's the best way to process
>>> subsets of my database?
>>> It seems to me I have several options:
>>> (1) Make separate databases for each subset I want to test, and run
>>> through the workflow on each
>>> (2) Set start and end times appropriate for my subset, re-scan and
>>> run through the workflow.
>>> (3) Populate the jobs table, and write a script to activate only the
>>> jobs I want and not the others.
>>> I want to do a fair bit of testing using different parameters before I run
>>> through the whole thing, so I think (3) may be best. But any advice would
>>> be appreciated.
>>> Regards,
>>>
>>> - Phil
>
>
> ------------------------------
>
> Message: 2
> Date: Sun, 1 May 2016 20:18:26 +0200
> From: Thomas Lecocq <Thomas.Lecocq(a)seismology.be>
> To: msnoise(a)mailman-as.oma.be
> Subject: Re: [MSNoise] advice on processing database subsets
> Message-ID: <8ee5b4f1-ce82-fa13-b898-9ddd1743451e(a)seismology.be>
> Content-Type: text/plain; charset=utf-8; format=flowed
>
> Hi guys,
>
> Yeah, I have been thinking about a "benchmark" mode for quite a number
> of weeks, i.e. since I tested a first run of PWS in order to compare the
> final dv/v; to compare properly I have to test quite a number of
> parameters.
>
> My current idea is to run a set of possible parameters, for different
> steps. This would lead to a large number of branches in a large tree,
> but it would definitely be quite interesting.
>
> I am really not in favor of duplicating the database, rather of creating
> a "config" file with a caller script to set/change parameters...
> Theoretically, the API should let you do all the actions. The only thing
> that would be a little trickier is to store/reuse the results of each
> step in order to compare them. For info, using the "shutil" module you
> can move/copy files easily.
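>
> To be concrete, that part is trivial -- a sketch (the folder names are
> only examples here, they depend on your config):
>
> import shutil
>
> # stash the results of one parameter run before launching the next one
> shutil.copytree("STACKS", "benchmark/run_01/STACKS")
> shutil.move("CROSS_CORRELATIONS", "benchmark/run_01/CC")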
>
> Let's keep brainstorming on that and see how it goes !
>
> Cheers
>
> Thomas
>
> On 01/05/2016 16:52, Lukas Preiswerk wrote:
>> [quoted text trimmed -- see Message 1 above]
>
>
>
> ------------------------------
>
> Message: 3
> Date: Mon, 2 May 2016 17:41:20 +1000
> From: Phil Cummins <phil.cummins(a)anu.edu.au>
> To: Python Package for Monitoring Seismic Velocity Changes using
> Ambient Seismic Noise <msnoise(a)mailman-as.oma.be>
> Subject: Re: [MSNoise] advice on processing database subsets
> Message-ID: <572704A0.1030608(a)anu.edu.au>
> Content-Type: text/plain; charset="UTF-8"; format=flowed
>
> Hi again,
>
> Thanks for the comments. Here's what I did to set just a single day for
> processing, so that I can test the parameter settings. I looked into the
> API code and needed to import from msnoise_table_def.py, but it seems to
> work OK:
>
> from msnoise.api import connect
> from msnoise_table_def import Job
>
> set_day = '2013-10-14'
> jobtype = 'CC'
> session = connect()
> jobs_set = session.query(Job).filter(Job.jobtype == jobtype).filter(Job.day == set_day)
> jobs_set.update({Job.flag: 'T'})
> jobs_unset = session.query(Job).filter(Job.jobtype == jobtype).filter(Job.day != set_day)
> jobs_unset.update({Job.flag: 'D'})
> session.commit()
>
> So now I have a jobs table with just the day I want set to 'T'. I hoped
> I was ready to try 'msnoise compute_cc', but it seems to want me to set
> Filters first. This appears to be referring to the MWCS filter
> parameters? I am a little surprised, since I thought MWCS would come
> later, and I don't understand how the CC computation would depend on
> the MWCS filter parameters.
>
> To tell you the truth, at the moment I am more interested in using the
> msnoise cross-correlations as input to a tomography algorithm than in
> MWCS itself. In any case I am keen to look at the CCs to see that
> they make sense before I move on to anything else.
>
> Could you please advise on whether there is a way to run
> compute_cc without having to worry about the MWCS parameters?
>
> Thanks,
>
> - Phil
>
>
> Thomas Lecocq wrote:
>> [quoted text trimmed -- see Message 2 above]
>
>
> ------------------------------
>
> Message: 4
> Date: Mon, 2 May 2016 09:44:38 +0200
> From: Thomas Lecocq <thomas.lecocq(a)oma.be>
> To: msnoise(a)mailman-as.oma.be
> Subject: Re: [MSNoise] advice on processing database subsets
> Message-ID: <57270566.40602(a)oma.be>
> Content-Type: text/plain; charset=utf-8; format=flowed
>
> Hi Phil,
>
> Nice piece of code.
>
> Currently, a Filter is defined for BOTH the cc step (whitening) AND the
> mwcs step. So, you'll have to define the filter's bounds for the CC step
> while keeping the MWCS values at 0, e.g. setting "low", "high",
> "rms_threshold=0", "used=true", and you'll be good to go!
>
> Thomas
>
> On 02/05/2016 09:41, Phil Cummins wrote:
>> [quoted text trimmed -- see Message 3 above]
>
>
>
> ------------------------------
>
> _______________________________________________
> MSNoise mailing list
> MSNoise(a)mailman-as.oma.be
> http://mailman-as.oma.be/mailman/listinfo/msnoise
>
>
> End of MSNoise Digest, Vol 27, Issue 2
> **************************************
Hi again,
As some of you may recall, I'm just getting started with msnoise. I have
a large database and have managed to get my station and data
availability tables populated.
At this point, rather than running through the whole database,
processing it with parameters I hope might work, I'd rather process
small subsets, e.g. 1 day at a time, to experiment with window lengths,
overlaps, etc., to find what seems optimal. My question is, what's the
best way to process subsets of my database?
It seems to me I have several options:
(1) Make separate databases for each subset I want to test, and run
through the workflow on each
(2) Set start and end times appropriate for my subset, re-scan and
run through the workflow.
(3) Populate the jobs table, and write a script to activate only
the jobs I want and not the others.
I want to do a fair bit of testing using different parameters before I run
through the whole thing, so I think (3) may be best. But any advice
would be appreciated.
Regards,
- Phil
Hi,
I'm new to msnoise and was unable to find a searchable archive for this
list(?); apologies if the question has been asked before.
I want to populate my station table, and msnoise has scanned my sds
database and identified all the stations. However, it has only filled
in the name fields; I need at a minimum the locations, and I would
like to add the instruments as well.
So I would like to find a way to populate the stations table using a
Python script. Presumably one would use pymysql? I have never used
pymysql, and was wondering if maybe someone already had such a script.
If you do, could you please send it to me?
(Actually, I want to do it this way in any case, since in the future my sds
database will have more stations than I want to use with msnoise.)
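Something like the following is what I have in mind -- the column names are
my guesses from a quick look at msnoise_table_def.py, so please correct me
if they are wrong:

from msnoise.api import connect
from msnoise_table_def import Station

# my own lookup of station -> (longitude, latitude); the values are made up
coords = {'STA1': (145.2, -37.8), 'STA2': (146.0, -38.1)}

session = connect()
for sta in session.query(Station).all():
    if sta.sta in coords:
        sta.X, sta.Y = coords[sta.sta]
        sta.coordinates = 'DEG'    # degrees rather than UTM
        sta.instrument = 'STS-2'   # example instrument code
    else:
        sta.used = False           # leave out stations I won't use
session.commit()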
Thanks,
- Phil
--
Phil Cummins
Prof. Natural Hazards
Research School of Earth Sciences
Australian National University
Hi all,
Just over a year after the last major release (MSNoise 1.3), we are proud
to announce the new MSNoise 1.4. It is a major release, with a massive
amount of work since the last one: in GitHub numbers, it's over 125
commits and about 5500 new lines of code and documentation added!
MSNoise 1.4 introduces four major new features: a new ultra-intuitive
web-based admin interface, support for plugins and extensions, the
phase-weighted stack, and instrument response removal. It also brings
the possibility to parallel/thread-process the cross-correlation and
MWCS steps. MSNoise is now "tested" automatically on Linux (thanks to
TravisCI) & Windows (thanks to Appveyor), for Python versions 2.7, 3.4
and 3.5. Yes, MSNoise is Python 3 compatible! See the full Release
Notes: http://msnoise.org/doc/releasenotes/msnoise-1.4.html
This version has benefited from outputs/ideas/pull requests/questions
from several users/friends:
Carmelo Sammarco
Esteban Chaves
Lion Krischer
Tobias Megies
Clare Donaldson
Aurélien Mordret
Raphaël De Plaen
Lukas E. Preiswerk
and all others (don't be mad)
Thanks to all for using MSNoise, and please, let us know why/how you use
it (and please cite it!)!
To date, we are aware of 12 publications using MSNoise! That's
the best validation of our project ever! See the full list on the
MSNoise website.
Thomas & Corentin
PS: if you use MSNoise for your research and prepare publications,
please consider citing it:
Lecocq, T., C. Caudron, and F. Brenguier (2014), MSNoise, a Python
Package for Monitoring Seismic Velocity Changes Using Ambient Seismic
Noise, Seismological Research Letters, 85(3), 715-726,
doi:10.1785/0220130073.
--
Dr. Thomas Lecocq
Geologist - Seismologist
Seismology - Gravimetry
Royal Observatory of Belgium
*
* * * * *
* * * *
---------
http://www.seismology.be
http://twitter.com/#!/Seismologie_be
https://www.facebook.com/seismologie.be
TIDES (MS)Noise training – Vienna 2016
The aim of this Training School is to demonstrate the powers and
limitations of using ambient seismic noise for seismological studies
using continuous data, for example using ambient seismic noise
cross-correlation for computing dv/v, surface wave tomography or
microseismic activity tracking. The MSNoise package will be introduced
and then used by participants on provided demo data. New developments to
MSNoise will be presented too: an easier configuration interface, an
improved pluggability and the demonstration of external plugins
currently in development (e.g. PPSD, TOMO or SARA). The workshop will
occur during the weekend following the EGU 2016 General Assembly and
will be run over three days. During the first day, we will make sure
everyone has the right software and provide a refresher course on
Python/ObsPy. The second day will contain review presentations related
to "the seismic noise" followed by a practical using MSNoise. On the
third day, other techniques like Amplitude Ratio and Tomography will be
presented before a MSNoise hacking session.
The planned schedule will be:
Saturday 23 April
·13.00 – 14.00: Welcome
·14.00 – 15.00: Participant's computer preparation "Install party"
·15.00 – 17.00: Refresher course on Python/ObsPy (version 1.0 & new
features)
Sunday 24 April
·09.00 – 09.30: Welcome
·09.30 – 11.00: Introduction on Noise – What is Noise? How do we use it?
Cross-Correlation? dv/v?
·11.00 – 12.00: MSNoise general introduction
·12.00 – 13.00: Lunch Break
·13.00 – 17.00: MSNoise practical
Monday 25 April
·08.00 – 08.30: Welcome
·08.30 – 10.00: Noise-based studies (SARA, TOMO)
·10.00 – 11.45: Hacking / Interacting with MSNoise
·11.45 – 12.00: Conclusion
Trainers
·Dr Thomas Lecocq – Seismologist at the Royal Observatory of Belgium –
Author of MSNoise
·Dr Corentin Caudron – Post-Doc Volcano-Seismologist at the University
of Cambridge (UK)
·Dr Aurélien Mordret – Post-Doc Seismologist at the Massachusetts
Institute of Technology (USA)
Budget
This training is *fully supported* by *TIDES*, an Action supported by
the *COST Association*, aiming at structuring the EU seismological
community to enable development of data-intensive, time-dependent
techniques for monitoring Earth active processes (e.g., earthquakes,
volcanic eruptions, landslides, glacial earthquakes) as well as oil/gas
reservoirs.
This Training School
<http://tides-cost.eu/assets/docs/Training_School_1.html> is aimed
mostly at TIDES-funded participants. TIDES participants (trainees) can
benefit from a Training-School Grant (fixed amount, overall
contribution for their travel, accommodation and meal expenses); please
check this page carefully:
http://www.tides-cost.eu/assets/docs/Training_School_2.html !!
Registration
Please register for the Training using the following form:
http://msnoise.org/tides2016wien
Within a few days, your name should appear on that list:
http://msnoise.org/tides-msnoise-workshop-vienna-2016/
More Information
The training will take place in "Vienna's Hottest museum": "Brennpunkt°
- Museum der Heizkultur Wien":
https://www.wien.gv.at/kultur/museen/brennpunkt/museum.html
Information on the TIDES Cost Action: http://tides-cost.eu/
Information on MSNoise: http://msnoise.org
Should you have questions, please contact tides.cost(a)gmail.com
or Thomas.Lecocq(a)seismology.be.
*Final Note*
Participants coming to Vienna only for this training might be interested
to arrive a day earlier and attend the EGU meeting, specifically the
session "SM4.1 Ambient seismic noise techniques: sources, monitoring, and
imaging", programmed on Friday 22 April in Room 1.85 and convened by
Céline Hadziioannou, Martin Schimmel, Chris Bean, Ulrich Wegler,
Christoph Sens-Schönfelder and Eric Larose.
http://meetingorganizer.copernicus.org/EGU2016/session/20403
--
Dr. Thomas Lecocq
Geologist - Seismologist
Seismology - Gravimetry
Royal Observatory of Belgium
*
* * * * *
* * * *
---------
http://www.seismology.be
http://twitter.com/#!/Seismologie_be
https://www.facebook.com/seismologie.be
Hello all,
I would like to suggest an option in MSNoise to change the order of operations in the s03compute_cc.py routine from Bandpass -> 1-Bit Norm -> Whitening to Bandpass -> Whitening -> 1-Bit Norm.
I have observed that for datasets where a cultural source (for example, machinery near two stations) produces significant correlation at the higher frequencies while the frequencies of interest are lower, the resulting correlation will be completely dominated by the high-frequency signal if 1-bit normalization occurs prior to whitening.
This could lead inexperienced users with noisy datasets to think that there are no low-frequency signals propagating across their array. When 1-bit normalization is done prior to whitening, it is no longer possible to filter the cross-correlations to further narrow the target frequencies, which is not the case if whitening occurs before the 1-bit step.
A workaround in the current version of MSNoise is to change the original bandpass to a frequency range that excludes the problem frequencies, but this requires prior knowledge of what the cross-correlations will look like over a wide frequency band.
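Schematically, the processing change I am proposing looks like this (plain
numpy, not the actual MSNoise internals; if I read it right, the real
whiten() also tapers the band edges, which I have skipped here):

import numpy as np

def whiten_then_onebit(data, sampling_rate, fmin, fmax):
    # whitening: flatten the amplitude spectrum inside [fmin, fmax],
    # keeping only the phase information
    n = len(data)
    spec = np.fft.rfft(data)
    freqs = np.fft.rfftfreq(n, d=1.0 / sampling_rate)
    band = (freqs >= fmin) & (freqs <= fmax)
    spec[band] /= np.abs(spec[band]) + np.finfo(float).eps
    spec[~band] = 0.0
    whitened = np.fft.irfft(spec, n)
    # 1-bit normalization only afterwards, so no single band can
    # dominate the sign of the trace
    return np.sign(whitened)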
Does anyone have any personal examples for datasets that support or contrast these observations?
Thanks,
-Francesco
Francesco Civilini, M.S.
Geophysics PhD Candidate
Victoria University of Wellington
Cotton Building Room 505
Wellington 6012, New Zealand
(027) 868-5939
Francesco.Civilini(a)vuw.ac.nz
www.francescocivilini.com
Hi all,
Sorry to disturb you all; it may be a very simple problem, but I am very
new to msnoise and mysql, so it seems I need some help.
I created a MySQL database.
I did "msnoise install", then "msnoise config", and I received the
following error message:
.....
File "/usr/lib64/python2.7/site-packages/matplotlib/pyplot.py", line
98, in <module>
_backend_mod, new_figure_manager, draw_if_interactive, _show =
pylab_setup()
File
"/usr/lib64/python2.7/site-packages/matplotlib/backends/__init__.py",
line 28, in pylab_setup
globals(),locals(),[backend_name],0)
ImportError: No module named backend_qt4agg
When I tried to easy_install this module, I also got the following error:
[root@localhost site-packages]# easy_install backend_qt4agg
Searching for backend-qt4agg
Reading https://pypi.python.org/simple/backend_qt4agg/
Reading https://pypi.python.org/simple/backend-qt4agg/
Couldn't find index page for 'backend_qt4agg' (maybe misspelled?)
Scanning index of all packages (this may take a while)
Reading https://pypi.python.org/simple/
No local packages or download links found for backend-qt4agg
error: Could not find suitable distribution for
Requirement.parse('backend-qt4agg')
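The only workaround I could think of was to force a non-Qt backend before
pyplot gets imported (Agg does not need any GUI toolkit), but I am not sure
this is the proper fix:

import matplotlib
matplotlib.use("Agg")   # must be called before "import matplotlib.pyplot"
import matplotlib.pyplot as plt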
I would appreciate it if anybody could help.
Thanks & Regards,
--
AHU KOMEC MUTLU
Bogazici University Kandilli Observatory & E.R.I.
Cengelkoy / ISTANBUL
Phone: +(90) 216 516 32 16
-------------------------
Hello Thom,
Apa kabar? (How are you?)
MSNoise is great! I've succeeded in installing it and am happy to see the
results. But sometimes I get problems during the compute_cc process, because
some of the data have different sampling rates and/or gaps. These
force MSNoise to stop. I could erase these "non-uniform" data manually in the
database, then reset CC and restart compute_cc, but for a huge
amount of files that is not convenient. Do you think it would be possible to
add a few lines to the MSNoise script so that when the data are not uniform,
MSNoise skips them without stopping the process and continues to compute
only the good data? Otherwise, I should fix my seismic data conversion. Or
do you have any better suggestions? Below is the error I sometimes
get.
Traceback (most recent call last):
File "c:\anaconda\lib\runpy.py", line 162, in _run_module_as_main
    "__main__", fname, loader, pkg_name)
File "c:\anaconda\lib\runpy.py", line 72, in _run_code
    exec code in run_globals
File "C:\Anaconda\Scripts\msnoise.exe\__main__.py", line 9, in <module>
File "c:\anaconda\lib\site-packages\msnoise\scripts\msnoise.py", line 393, in run
    cli(obj={})
File "c:\anaconda\lib\site-packages\click-5.1-py2.7.egg\click\core.py", line 700, in __call__
    return self.main(*args, **kwargs)
File "c:\anaconda\lib\site-packages\click-5.1-py2.7.egg\click\core.py", line 680, in main
    rv = self.invoke(ctx)
File "c:\anaconda\lib\site-packages\click-5.1-py2.7.egg\click\core.py", line 1027, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
File "c:\anaconda\lib\site-packages\click-5.1-py2.7.egg\click\core.py", line 873, in invoke
    return ctx.invoke(self.callback, **ctx.params)
File "c:\anaconda\lib\site-packages\click-5.1-py2.7.egg\click\core.py", line 508, in invoke
    return callback(*args, **kwargs)
File "c:\anaconda\lib\site-packages\msnoise\scripts\msnoise.py", line 174, in compute_cc
    main()
File "c:\anaconda\lib\site-packages\msnoise\s03compute_cc.py", line 271, in main
    basetime, tramef_Z = preprocess(db, stations, comps, goal_day, params, tramef_Z)
File "c:\anaconda\lib\site-packages\msnoise\s03compute_cc.py", line 130, in preprocess
    stream[gap[0]] = stream[gap[0]].__add__(stream[gap[1]], method=0, fill_value="interpolate")
File "c:\anaconda\lib\site-packages\obspy-0.10.2-py2.7-win-amd64.egg\obspy\core\trace.py", line 681, in __add__
    raise TypeError("Sampling rate differs")
TypeError: Sampling rate differs
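For now I am thinking of pre-checking each stream myself before running
compute_cc, something like this (the file name is just an example):

from obspy import read

st = read("path/to/one_day_file.mseed")
rates = set(tr.stats.sampling_rate for tr in st)
if len(rates) > 1:
    print("skipping, mixed sampling rates: %s" % rates)
else:
    # with a single rate, merging across gaps works fine
    st.merge(method=0, fill_value="interpolate")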
Cheers,
-----
Dr. Devy Kamil Syahbana
Head of Volcano Monitoring Section for Eastern Region of Indonesia
Ministry of Energy and Mineral Resources
Geological Agency
Center for Volcanology and Geological Hazard Mitigation
Jalan Diponegoro N°57
Bandung 40122
Indonesia
http://vsi.esdm.go.id
Hi,
On 19/11/2015 07:41, Yoones Vaezi wrote:
> Hi,
>
> Another problem I am facing concerns the weights used in the MWCS
> program. The weights are supposed to be calculated using equation A5
> of Clarke et al. (2011), which is
> [inline image of equation A5 -- not reproduced here].
> But the following two lines in your Python code do not follow this
> equation:
>
> w = 1.0 / (1.0 / (coh[indRange]**2) - 1.0)
> w[coh[indRange] >= 0.99] = 1.0 / (1.0 / 0.9801 - 1.0)
>
>
> Could you please let me know where the values 0.99 and 0.9801 are
> coming from and what is the reason behind having these two lines in
> the code?
They come from the original code of Clarke, so, no, I don't know precisely.
I assume it was to avoid infinities and divisions by zero in the Fortran code.
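Numerically the two lines do make sense together: rearranging the first one,

w = 1 / (1/C^2 - 1) = C^2 / (1 - C^2)

which diverges as the coherence C approaches 1; clamping C at 0.99 (hence
the 0.99^2 = 0.9801) caps the weight at 0.9801 / 0.0199, i.e. about 49.3.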
Thomas
>
> Thank you very much for your time and consideration,
>
> Regards,
>
> Yoones
> __________________________________________