
Forum: biomod2 package is now available!

Posted by: damien georges
Date: 2012-07-26 13:56
Summary: biomod2 package is now available!
Project: BioMod

Content:

Dear BIOMOD-users,

You were eagerly waiting for it, and we are happy to say that the new version of BIOMOD called biomod2 is now online on R-Forge.

Although we have kept the same modelling philosophy as the former version (which we will still maintain for a while), we have made crucial changes. biomod2 is now fully object-oriented and designed to run on a single species at a time (see the MultiSpeciesModelling vignette for modelling several species at once). For advanced BIOMOD users, the new functions might be a bit disorienting at first, but you will soon see that this new version is much more advanced and practical than the former ones. Among the novelties: the addition of MAXENT to the modelling techniques, a large range of evaluation metrics, a more refined definition of ensemble modelling and ensemble forecasting, and the possibility to give presence-only data and environmental rasters to biomod2 and let it extract pseudo-absence data directly.
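To fix ideas, the new object-oriented workflow looks roughly like this. This is a minimal sketch, not a definitive recipe: the function names (BIOMOD_FormatingData, BIOMOD_Modeling, BIOMOD_EnsembleModeling) are the package's, but the exact arguments have changed between versions, and myPresences / the raster file names are placeholders you must supply yourself.

```r
library(biomod2)
library(raster)

# Environmental rasters (placeholder file names)
myExpl <- stack("bio1.grd", "bio12.grd")

# Presence-only data plus rasters: biomod2 can draw the
# pseudo-absences itself (argument names may vary by version)
myData <- BIOMOD_FormatingData(
  resp.var  = myPresences,   # placeholder: presence points for one species
  expl.var  = myExpl,
  resp.name = "MySpecies",
  PA.nb.rep      = 2,        # two pseudo-absence samplings
  PA.nb.absences = 500,
  PA.strategy    = "random")

# Single-species modelling with several techniques, incl. MAXENT,
# evaluated with more than one metric
myModels <- BIOMOD_Modeling(
  myData,
  models = c("GLM", "GBM", "MAXENT"),
  models.eval.meth = c("ROC", "TSS"))

# Ensemble modelling on top of the individual models
myEM <- BIOMOD_EnsembleModeling(modeling.output = myModels)
```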

We have created several vignettes to help you get used to this new version, as well as a figure explaining the different ways of giving data to BIOMOD.

Please bear in mind that R-Forge is a development platform: this new package will be updated repeatedly over the next couple of weeks (correcting bugs, adding documentation, adding functionality), so remember to update the package before each new study you do.

Last but not least, all comments are welcome! If you find a bug, if you think some documentation points are unclear, or if you can think of new functionality that might be useful, just let us know ASAP.

We count on you to help us polish this new version into a very nice tool. We will then release it to CRAN by the end of July. Please remember to include your code, R version, OS and BIOMOD version every time you report a bug or a mistake in the vignettes or help files.

We hope you will enjoy this new version of BIOMOD.

With our best wishes,

Damien & Wilfried

Latest News

biomod2 is now developed on GitHub

damien georges - 2020-03-02 17:08 -

RE: Evaluating models on new evaluation dataset [ Reply ]
By: Steve Tulowiecki on 2013-12-10 15:07
[forum:40177]
Damien,

After reading your comments, and doing some thinking, it now makes sense. Thank you so much!

Sincerely,
Steve

RE: Evaluating models on new evaluation dataset [ Reply ]
By: damien georges on 2013-12-03 14:38
[forum:40143]
Dear Steve,

Some ensemble models (e.g. weighted mean, committee averaging) depend on an evaluation metric, because they rely on scores and/or thresholds optimised for that metric; that is why you have several versions of these ensemble models (TSS-based and ROC-based).
Afterwards, each model you have can be evaluated with whatever evaluation metric you want. That is why each ensemble model has both a ROC score and a TSS score.
Does that help?
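Put differently, a metric plays two roles: one metric is used to *build* an ensemble (optimising its scores/thresholds), and then every metric is used to *score* it, which yields a score for every combination. A toy illustration of the resulting layout (the numbers are invented, not biomod2 output):

```r
# Build metric (rows) vs scoring metric (columns): every ensemble
# gets a score under every metric, hence ROC *and* TSS entries everywhere.
scores <- matrix(c(0.91, 0.78,     # invented toy values
                   0.89, 0.81),
                 nrow = 2, byrow = TRUE,
                 dimnames = list(built.with = c("EMca.by.ROC", "EMca.by.TSS"),
                                 scored.by  = c("ROC", "TSS")))
scores
```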

For the second point: if you give a new dataset to the evaluate() function, the evaluation scores will be calculated on this dataset.

Hope that helps,
Damien


RE: Evaluating models on new evaluation dataset [ Reply ]
By: Steve Tulowiecki on 2013-12-02 19:27
[forum:40142]
Damien,

Thanks for the reply. I am still confused as to why there is a set of "ROC" and "TSS" columns that contain values, for both the "ROC" and "TSS" rows.

Perhaps this is related... are there "stored," optimized ROC and TSS values from the built models that are used to calculate what the ROC and TSS values are with new data, when using the evaluate() function? Or does the evaluate() function find new, optimal thresholds for ROC and TSS using the new data specified in the evaluate() function?

Thanks again,
Steve

RE: Evaluating models on new evaluation dataset [ Reply ]
By: damien georges on 2013-12-02 15:36
[forum:40140]
Dear Steve,

In fact, the scores you obtain for EMmean are exactly the same because the evaluation metric has no influence on this ensemble model (it is just a mean of all models). However, EMca is evaluation-metric dependent, because the threshold used to transform continuous predictions into binary ones is the one that optimises a given evaluation metric (here ROC or TSS). That implies that the ensemble models are not exactly the same, so their predictive performances are not the same.
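The dependence can be sketched generically: each model's continuous prediction is binarised at the threshold that maximises the chosen metric on calibration data, and committee averaging then averages the binary votes. This is only an illustration with toy numbers, not biomod2's internal code:

```r
# Toy sketch: TSS-optimised threshold for one model, then committee averaging.
obs  <- c(1, 1, 1, 0, 0, 0)              # calibration observations (toy)
pred <- c(0.9, 0.7, 0.4, 0.5, 0.3, 0.1)  # one model's continuous predictions (toy)

# TSS = sensitivity + specificity - 1, for a given cut-off
tss <- function(th) {
  bin  <- as.integer(pred >= th)
  sens <- sum(bin == 1 & obs == 1) / sum(obs == 1)
  spec <- sum(bin == 0 & obs == 0) / sum(obs == 0)
  sens + spec - 1
}

ths    <- seq(0.05, 0.95, by = 0.05)
th.tss <- ths[which.max(sapply(ths, tss))]  # threshold maximising TSS

# Committee averaging = mean of the binarised predictions across models
# (one row per model in practice; a ROC-optimised threshold would generally
# differ, hence a different ensemble and different scores).
models.bin <- rbind(as.integer(pred >= th.tss))
committee  <- colMeans(models.bin)
```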

Hope that helps,
Damien.

RE: Evaluating models on new evaluation dataset [ Reply ]
By: Steve Tulowiecki on 2013-11-29 20:15
[forum:40134]

Attachment: example.xlsx (99 downloads)
Hello Damien,

I am evaluating an ensemble model in biomod2, but I am having difficulty interpreting a table that is created using one of the biomod2 functions.

I am evaluating an ensemble model that is created using the mean of the predicted values from five different modelling algorithms (i.e. GAM, GBM, GLM, MARS, and RF), as well as using committee averaging. The models are evaluated using the 'ROC' and 'TSS' statistics.

When I use the get_evaluations() function to evaluate the ensemble model, both 'ROC' and 'TSS' show up in row names and column names. It is difficult to interpret the results, because I do not know exactly which evaluation statistic is which.

An example spreadsheet is attached. 'cas.den' is the abbreviation for the species being modeled. As you can see, the first eight columns (green) do not exactly match the last eight columns, and there is both an 'ROC' and 'TSS' row, making me a bit unsure as to what the values mean, exactly.

Thank you for considering this issue. Hopefully it is just confusion on my part!

Thanks,
Steve T.

RE: Evaluating models on new evaluation dataset [ Reply ]
By: Steve Tulowiecki on 2013-11-21 16:29
[forum:40091]
Damien,

Following your suggestions, I have successfully evaluated my models upon a new evaluation dataset.

Thank you for the fast reply!

Steve

RE: Evaluating models on new evaluation dataset [ Reply ]
By: damien georges on 2013-11-20 08:05
[forum:40086]
Dear Steve,

As you mentioned, the evaluate() function is there to post-evaluate models built within biomod2.
Look at the example in the evaluate() help page to see how it works:
?evaluate

I guess the main point (and maybe where you ran into trouble) is the creation of the data table (raster stacks are not supported yet):

"Must be a dataset with the first column containing the observed data for your species. The following columns must be the explanatory variables at the observed points. Be sure that the colnames of your dataset are the name of your species, then the names of the variables used for building the models at the previous steps."
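In practice, such a table could be built along these lines. This is a sketch under assumptions: "MySpecies", the variable names (bio1, bio12) and the obs.new / bio1.new / bio12.new vectors are placeholders, and the exact evaluate() arguments may differ between biomod2 versions.

```r
# New evaluation dataset: first column = 0/1 observations, named after
# the species; remaining columns = the explanatory variables, with the
# same names as those used to build the models.
new.data <- data.frame(
  MySpecies = obs.new,    # placeholder: new observed presences/absences
  bio1      = bio1.new,   # placeholder: variable values at the new points
  bio12     = bio12.new)

# Post-evaluate the previously built models on this independent dataset
new.scores <- evaluate(myModels, data = new.data, stat = c("ROC", "TSS"))
```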

Concerning the report on the help file: as you mentioned, the Details section of the evaluate() help page is not the right one; I will update it now. All the other sections seem to be OK.

Hope that helps,
Damien.

Evaluating models on new evaluation dataset [ Reply ]
By: Steve Tulowiecki on 2013-11-20 00:36
[forum:40085]
Hello Damien, Wilfried, and others,

I am using biomod2 extensively to create species distribution models for my dissertation research. Biomod2 has been incredibly useful.

However, I had some questions, centered upon the following: is there a way to evaluate models upon new/different evaluation datasets, without recreating another set of models?

To do so, I attempted to use the "evaluate()" function in the latest version of biomod2 (3.1-23), but I was unsuccessful. Also, it appears that in the biomod2.pdf presentation, the description for the evaluate() function is the same as the description for the "variables_importance()" function.

Is it possible to evaluate models on different evaluation datasets? Is the evaluate() function the proper function to do so? If so, is there updated/corrected documentation for this function? Or, is there a different function to accomplish this task?

Thank you very much for your help.

Sincerely,
Steve Tulowiecki
