I currently use CppAD and am considering porting my codebase to use your library. I make use of gradient, Jacobian, and Hessian computations in CppAD. The optimisation framework I'm using also relies on the sparsity patterns that CppAD can compute for both Jacobian and Hessian matrices. Before I begin the porting work, is there anything I'm currently using from CppAD, as listed above, that your library does not provide? Thanks
Hi, while you should find that Adept is much faster than CppAD for the pure AD, it is missing some of the functionality you require. Firstly, it only performs first-order differentiation, so it cannot compute the Hessian of an arbitrary function. However, as explained in section 4.1 of the documentation (https://www.met.reading.ac.uk/clouds/adept/adept_documentation.pdf), if your cost function is quadratic (e.g. a least-squares cost) then you can compute a good approximation to the Hessian from the Jacobian, which Adept can calculate. Secondly, there is no sparse array functionality, nor a way for Adept to report the sparsity pattern of Jacobian matrices.