Stability of BRAINSFit registration

Stability of BRAINSFit registration

Matthew Anthony Mouawad

Hello everyone, I have another question, this time concerning the stability of BRAINSFit's registration.

 

What led me to ask this: I ran a registration on an image set one day and then, after noticing some weird artifacts, ran it again about a week later. I got different outputs; many of the "barrelling" and "pincushion" effects from the BSpline registration were gone on the rerun. Below I give some more details about the nature of my registration in case they are relevant, but my main question is: if I have a set of images (in this case DCE-MRI images) and I register them on day 1 and then again on day 10 (for example), am I guaranteed to get the same output (or at least mostly the same), or could it be that on day 1 the optimizer arrives at some local minimum that it avoids on day 10? I know that the sampling strategy is random, but I am localizing the registration to an ROI and using 100% of that ROI (i.e., if my image is 384x384x112 = 16515072 voxels and my ROI is 1e6 voxels, then the sampling percentage I use is 1e6/16515072; in a previous email thread I was told that even though it samples randomly, it supposedly doesn't sample the same voxel twice), so the randomness can't come from there. Hopefully that question is clear enough. Below is some extra information in case it is pertinent.
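For reference, the sampling fraction described above works out as follows (a quick sketch using the voxel counts quoted in this post):

```python
# Sampling fraction chosen so that the random sampler draws roughly
# as many samples as there are voxels in the ROI.
image_voxels = 384 * 384 * 112   # full DCE-MRI volume: 16,515,072 voxels
roi_voxels = 1_000_000           # approximate ROI size from this post

sampling_fraction = roi_voxels / image_voxels
print(f"{sampling_fraction:.6f}")  # ~0.060551, i.e. about 6% of the whole image
```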

 

Also, I don't know if it is relevant, but when I run BRAINSFit, either through my script or through the GUI, I also get a warning:

WARNING: In ..\..\..\..\..\ITKv4\Modules\Numerics\Optimizersv4\src\itkLBFGSBOptimizerv4.cxx, line 116

LBFGSBOptimizerv4 (00000000EE382A50): LBFGSB optimizer does not support scaling. All scales are set to one.

 

--- Additional info

  • Registering DCE-MRI images using a script that loops through the moving images (post-contrast images) and registers each to the pre-contrast image. I can attach the script if necessary, though it's pretty messy and inefficient since I am not a programmer and have very little experience with Python.
  • I am using BRAINSFit. The sampling percentage is specified above, with an ROI. I can attach a log if necessary.
  • There is no bulk transform, i.e., I am only using BSplines.
  • For options that I want left at their defaults, I do not specify them in my script (e.g., I don't specify things like "max iterations"). The log shows the default parameters are still set anyway.

Any other info needed let me know.

 


_______________________________________________
slicer-users mailing list
[hidden email]
http://massmail.spl.harvard.edu/mailman/listinfo/slicer-users
To unsubscribe: send email to [hidden email] with unsubscribe as the subject
http://www.slicer.org/slicerWiki/index.php/Documentation/4.3/FAQ

Re: Stability of BRAINSFit registration

Steve Pieper-2
Hi Matthew - 

This is probably a good topic to bring up directly with the BRAINSTools team [1].  I'm not sure if there's a way to set the random number seed so you get identical results with every run.  If your results are very different from run to run, though, it probably means the solution is unstable and you need to try different parameters.

Best of luck,
Steve

[1] https://github.com/BRAINSia/BRAINSTools



Re: Stability of BRAINSFit registration

Matthew Anthony Mouawad
In reply to this post by Matthew Anthony Mouawad

Okay, I may try to contact them. I had another question; I am not sure if you can answer it. Suppose I define an ROI that has x voxels and my entire image has y voxels. If I then set the ROI as the region I want to register over but request, say, 2x samples, will duplicate voxels be used in the metric estimation? I am assuming the sampled voxels are restricted to the ROI. And if that is the case, say I decide to request exactly x samples: is it possible that the random sampling draws the same voxel on two different samples?
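To illustrate the distinction this question hinges on, here is a small sketch in plain Python (not BRAINSFit internals, which may differ): sampling without replacement can never repeat a voxel, while sampling with replacement can repeat one, and must do so once you request more samples than the ROI contains:

```python
import random

roi = list(range(1000))  # stand-in for 1000 voxel indices in an ROI
rng = random.Random(42)  # fixed seed for reproducibility

# Without replacement: each voxel appears at most once, so requesting
# all 1000 voxels yields exactly the ROI, just in shuffled order.
no_dup = rng.sample(roi, k=len(roi))
print(len(set(no_dup)))  # 1000 -> no duplicates possible

# With replacement: duplicates can occur even for k <= len(roi), and by
# the pigeonhole principle they are guaranteed once k > len(roi).
with_dup = rng.choices(roi, k=2 * len(roi))
print(len(set(with_dup)) < len(with_dup))  # True -> duplicates occurred
```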

 


Re: Stability of BRAINSFit registration

Steve Pieper-2
In reply to this post by Matthew Anthony Mouawad
Hmm, I don't know the internals for sure, but if there are x voxels in the ROI and you ask for 2x samples, you are going to get duplicates.  Or did I not understand the question?


Re: Stability of BRAINSFit registration

Andras Lasso-2

--numberOfSamples is deprecated; --samplingPercentage should be used instead. One advantage of a percentage is that the value is independent of the volume resolution.

 

When "Random" sampling is used, nothing prevents the same position from being used multiple times. You would need to make a few small changes in BRAINS to make the "Regular" option (which takes every n-th sample) available.
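As a sketch of what switching over might look like from a script, here is how a BRAINSFit invocation could be assembled with --samplingPercentage instead of the deprecated --numberOfSamples. The file names are placeholders, and the other flags shown are common BRAINSFit options; check your build's --help for the exact set it supports:

```python
# Assemble a BRAINSFit command line using --samplingPercentage rather than
# the deprecated --numberOfSamples. File names below are hypothetical.
roi_voxels = 1_000_000
image_voxels = 384 * 384 * 112

cmd = [
    "BRAINSFit",
    "--fixedVolume", "pre_contrast.nrrd",    # placeholder inputs
    "--movingVolume", "post_contrast.nrrd",
    "--transformType", "BSpline",
    "--outputTransform", "bspline.h5",
    "--samplingPercentage", str(roi_voxels / image_voxels),
]
print(" ".join(cmd))
```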

 

Andras

 


Re: Stability of BRAINSFit registration

Andrey Fedorov-2
Matthew, you should be able to specify the seed to increase your chances of getting a reproducible result in BRAINSFit. However, in my experience, even with a fixed seed there is some uncertainty in the way B-spline registration is done in ITK. I could not nail it down, but I had a similar experience where the results would not be identical across repeat registrations.
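The fixed-seed idea can be illustrated at the sampler level (plain Python, just to show the principle; BRAINSFit/ITK internals add other sources of nondeterminism, such as threading):

```python
import random

def draw_samples(seed, n=10):
    """Draw n pseudo-random voxel indices; a fixed seed fixes the sequence."""
    rng = random.Random(seed)
    return [rng.randrange(1_000_000) for _ in range(n)]

# The same seed reproduces exactly the same sample positions,
# so the metric is evaluated at identical points on every run.
print(draw_samples(7) == draw_samples(7))  # True
```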


Re: Stability of BRAINSFit registration

Matthew Anthony Mouawad
In reply to this post by Andras Lasso-2
I appreciate the response; that makes a lot of sense. I haven't looked into it much yet, but in your experience, which parameters are most relevant to avoiding local minima and increasing reproducibility? Would it be worth increasing the number of histogram bins or match points? Or is this area a bit undefined?


Re: Stability of BRAINSFit registration

Andrey Fedorov-2
In reply to this post by Andras Lasso-2
> what parameters in your experience are relevant to avoiding local minima/increasing reproducibility?

These are separate questions. You should fine-tune the number of samples and
other parameters for your particular registration problem. Do this by
experimenting: observe the convergence of the optimization process and balance
the computation time against your requirements.

The second issue (reproducibility) is trickier, since it probably has to do
with the implementation of registration in ITK. I think the first question is
whether there is some source of randomness beyond the sampling seed. You
should try asking the BRAINSFit developers (I am not sure they monitor this
Slicer list), or post to the ITK mailing list. I don't have the answer.
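
The seed question can at least be reasoned about in isolation: if the sampler's only randomness comes from its seed, reseeding reproduces the exact same draw. A toy sketch with Python's generic RNG (not BRAINSFit's actual sampler):

```python
import random

def sample_positions(n_voxels, n_samples, seed):
    """Draw n_samples voxel indices using a dedicated, seeded RNG."""
    rng = random.Random(seed)
    return [rng.randrange(n_voxels) for _ in range(n_samples)]

# Same seed, same image size -> bit-identical sample positions.
a = sample_positions(16_515_072, 5, seed=1234)
b = sample_positions(16_515_072, 5, seed=1234)
print(a == b)  # True
```

If repeat runs still differ with a fixed seed, the variation must come from somewhere else (optimizer internals, threading order, etc.), which is exactly the question for the ITK/BRAINSTools developers.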

On Tue, Feb 21, 2017 at 4:35 PM, Matthew Anthony Mouawad
<[hidden email]> wrote:

> I appreciate the response. That makes a lot of sense. I haven't looked a ton into it but what parameters in your experience are relevant to avoiding local minima/increasing reproducibility? Would it be worth it to increase the #bins or #match points? Or is this area a bit undefined?
>
> -----Original Message-----
> From: Andrey Fedorov [mailto:[hidden email]]
> Sent: Tuesday, February 21, 2017 4:31 PM
> To: Andras Lasso <[hidden email]>
> Cc: Steve Pieper <[hidden email]>; Matthew Anthony Mouawad <[hidden email]>; SPL Slicer Users <[hidden email]>
> Subject: Re: [slicer-users] Stability of BRAINSFit registration
>
> Matthew, you should be able to specify the seed to increase your chances of getting a reproducible result in BRAINSFit. However, in my experience, even with a fixed seed there is some uncertainty in the way b-spline registration is done in ITK. I could not nail it down, but I had a similar experience: the results would not be identical across repeat registrations.
>

Re: Stability of BRAINSFit registration

Matthew Anthony Mouawad
In reply to this post by Andras Lasso-2
Hm, okay. I'll sit on this for a bit and experiment to see where it takes me. Thank you for your time!


Re: Stability of BRAINSFit registration

Matthew Anthony Mouawad
In reply to this post by Steve Pieper-2
I see. Regular sampling might be something I'd be interested in looking at.

One other question related to the whole issue: currently I loop through my images and register each one to a fixed image. In the CLI command I set "wait for completion = true" so that one registration finishes before the next starts. When I first ran into this issue I also added a time delay of a few seconds on top of the wait, just in case, and it seems to have resulted in a better registration, though that may just be chance. I don't know if you can comment on whether that would have any effect.

Another thought: I also loop through a changing parameter, i.e., I do a full registration at a grid size of 18,18,5, then another full registration at 15,15,5, etc. This leaves a large number of images in the Slicer scene, since I don't save and close them as I go. Do you think having that many images loaded would affect anything? Or does Slicer write them to disk once they've been added to the scene?

Sorry if these are dumb questions.
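
For reference, the loop described above might look roughly like this from Slicer's Python console. The parameter names follow the BRAINSFit flags mentioned in this thread (`samplingPercentage`, b-spline only, one grid size per pass); treat this as a sketch under those assumptions, not a verified invocation, and check the names against your BRAINSFit version's `--help`.

```python
def brainsfit_params(fixed, moving, grid_size, sampling_pct):
    """Assemble a BRAINSFit parameter dictionary for one registration pass.

    Flag names are taken from this thread (--samplingPercentage, --useBSpline,
    --splineGridSize); verify them against your BRAINSFit build.
    """
    return {
        "fixedVolume": fixed,
        "movingVolume": moving,
        "useBSpline": True,  # no bulk transform, b-spline only
        "splineGridSize": ",".join(str(g) for g in grid_size),
        "samplingPercentage": sampling_pct,
    }

# One full registration per grid size, as described above.
for grid in [(18, 18, 5), (15, 15, 5)]:
    params = brainsfit_params("pre_contrast", "post_contrast_01", grid, 0.0606)
    print(params["splineGridSize"])  # prints "18,18,5" then "15,15,5"
    # Inside Slicer one would then run (blocking until the CLI finishes):
    # slicer.cli.run(slicer.modules.brainsfit, None, params,
    #                wait_for_completion=True)
```

With `wait_for_completion=True` the call blocks until the module returns, so an extra sleep between iterations should not be needed for correctness.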


-------- Original message --------
From: Andras Lasso <[hidden email]>
Date: 2017-02-21 15:36 (GMT-05:00)
To: Steve Pieper <[hidden email]>, Matthew Anthony Mouawad <[hidden email]>
Cc: SPL Slicer Users <[hidden email]>
Subject: RE: [slicer-users] Stability of BRAINSFit registration

--numberOfSamples is deprecated, --samplingPercentage should be used instead. One advantage of percentage is that the value is independent from volume resolution.

 

When “Random” sampling is used, nothing prevents using the same position multiple times. You would need to make a few small changes in BRAINS to make “Regular” option available (that takes every n-th sample).

 

Andras
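
Both points above can be checked with quick arithmetic: the percentage corresponding to a ~1e6-voxel ROI in a 384x384x112 image, and the fact that sampling with replacement produces duplicates (a generic sketch, not BRAINS's actual sampler):

```python
import random

def sampling_percentage(roi_voxels, image_dims):
    """Fraction of the whole image that the ROI represents."""
    total = 1
    for d in image_dims:
        total *= d
    return roi_voxels / total

# The numbers from this thread: 384*384*112 = 16,515,072 voxels.
pct = sampling_percentage(1_000_000, (384, 384, 112))
print(f"{pct:.4f}")  # 0.0606, i.e. about 6% of the image

# "Random" sampling draws each position independently, so even asking for
# exactly as many samples as the ROI has voxels yields duplicate positions.
rng = random.Random(0)
roi_size = 1000
draws = [rng.randrange(roi_size) for _ in range(roi_size)]
print(len(set(draws)))  # fewer than 1000 -> some voxels drawn more than once
```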

 

From: slicer-users [mailto:[hidden email]] On Behalf Of Steve Pieper
Sent: February 21, 2017 15:10
To: Matthew Anthony Mouawad <[hidden email]>
Cc: SPL Slicer Users <[hidden email]>
Subject: Re: [slicer-users] Stability of BRAINSFit registration

 

Hmm, I don't know the internals for sure, but if there are x voxels in the ROI and you ask for 2x samples, you are going to get duplicates.  Or did I not understand the question?

 

On Tue, Feb 21, 2017 at 2:32 PM, Matthew Anthony Mouawad <[hidden email]> wrote:

Okay, I may try to contact them. I had another question that I am not sure you can answer. Suppose I define an ROI that has x voxels and my entire image has y voxels. If I then set the ROI as the region I want to register over but ask for, say, 2x samples, will it use duplicate voxels in the metric estimation? I am assuming the voxels it uses are localized to the ROI… so in that case, let's say I decide to use just x samples: is it possible that the random sampling draws the same voxel on two different samples?

 

From: Steve Pieper [mailto:[hidden email]]
Sent: Tuesday, February 21, 2017 2:25 PM
To: Matthew Anthony Mouawad <[hidden email]>
Cc: SPL Slicer Users <[hidden email]>
Subject: Re: [slicer-users] Stability of BRAINSFit registration

 

Hi Matthew - 

 

This is probably a good topic to bring up directly with the BRAINSTools team [1].  I'm not sure if there's a way to set the random number seed so you get identical results with every run.  If your results are very different from run to run, though, it probably means the solution is unstable and you need to try different parameters.

 

Best of luck,

Steve

 

 

On Tue, Feb 21, 2017 at 10:27 AM, Matthew Anthony Mouawad <[hidden email]> wrote:

Hello everyone, I have another question concerning the stability of brainsfit’s registration.

 

What led me to ask this was that I ran a registration on an image set one day and then, about a week later, ran it again after I noticed some weird artifacts. I got different outputs: many of the “barrelling” and “pin cushioning” effects from the b-spline registration were gone when I reran it. Below I give some more details about the nature of my registration in case they're relevant, but my main question is: if I register a set of images (in this case DCE-MRI images) on day 1 and then again on day 10 (for example), is it guaranteed that I would get the same output (or at least mostly the same), or could it be that on day 1 it arrives at some local minimum and on day 10 it avoids it somehow? I know that the sampling strategy is random, but I am localizing the registration to an ROI and using 100% of that ROI (i.e., if my image is 384x384x112 = 16,515,072 voxels and my ROI is 1e6 voxels, then the sampling percentage I use is 1e6/16515072; in a previous email thread I was told that even though it samples randomly, it supposedly doesn't sample the same voxel twice), so the randomness can't come from there. Hopefully that question is clear enough. Below is some extra information in case it is pertinent.

 

Also, I don't know if it is relevant, but when I run BRAINSFit, either through my script or through the GUI, I also get a warning:

WARNING: In ..\..\..\..\..\ITKv4\Modules\Numerics\Optimizersv4\src\itkLBFGSBOptimizerv4.cxx, line 116

LBFGSBOptimizerv4 (00000000EE382A50): LBFGSB optimizer does not support scaling. All scales are set to one.

 

--- Additional info

  • Registering DCE-MRI images using a script that loops through the moving images (post-contrast images) and registers each to the pre-contrast image. I can attach the script if necessary, though it's pretty messy and inefficient since I am not a programmer and have very little experience with Python.
  • I am using BRAINSFit. The percent sampling is specified above, with an ROI. I can attach a log if necessary.
  • There is no bulk transform, i.e., I am only using b-splines.
  • For options that I want at their defaults, I do not specify them in my script, e.g., “max iterations.” Judging from the log, the default parameters are still set anyway.

Any other info needed let me know.

 

