Haste RNN support #8
I was able to do this:

```python
import numpy as np
import tensorflow as tf

train_x = np.random.rand(500, 40, 20)
inputs = tf.keras.Input(shape=(train_x.shape[1], train_x.shape[2]))
x, state = haste1(inputs, training=True)  # haste1: a haste GRU layer instance
model = tf.keras.Model(inputs=inputs, outputs=x)
print(model.summary())
opt = tf.keras.optimizers.Adam(learning_rate=0.01)
model_hist = model.fit(train_x, train_y, epochs=10, batch_size=32, verbose=1)
```

Is it possible to add this layer in place of a pure `GRU`?
I will look into it. However, since kgcnn is fully based on keras, I am not sure we should add more external dependencies apart from keras. But maybe as a layer module it should be fine.
I really want to test this special layer if possible. Maybe you can make an exception for this one.
Thanks, I am still trying to get haste installed. But I think this error is not from us; we should report it to haste, since I think it originates on their side. I am not sure that there is a great advantage in using haste cells. I think there is an advantage in using the final haste GRU, but because the recurrent step requires some attention pooling over the nodes (for graph networks), we are stuck with the GRUCells. However, I will think about this; maybe I made a mistake in my logic/implementation.
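To illustrate the point above: a full GRU layer fuses all timesteps inside one kernel (which is where haste gets its speed), whereas a cell exposes each recurrent step, so an attention pooling over graph nodes can be inserted between steps. Below is a minimal NumPy sketch of a single GRU-cell step (one common gate convention; this is not the kgcnn or haste implementation, and all names are illustrative):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_cell_step(x, h, Wz, Uz, bz, Wr, Ur, br, Wh, Uh, bh):
    """One GRU update: h_new = (1 - z) * h + z * h_tilde."""
    z = sigmoid(x @ Wz + h @ Uz + bz)               # update gate
    r = sigmoid(x @ Wr + h @ Ur + br)               # reset gate
    h_tilde = np.tanh(x @ Wh + (r * h) @ Uh + bh)   # candidate state
    return (1.0 - z) * h + z * h_tilde

rng = np.random.default_rng(0)
n_nodes, n_in, n_units = 5, 8, 16
x = rng.normal(size=(n_nodes, n_in))   # features from one message-passing step
h = np.zeros((n_nodes, n_units))       # per-node hidden state
# 9 parameter tensors in order: Wz, Uz, bz, Wr, Ur, br, Wh, Uh, bh
params = [rng.normal(scale=0.1, size=s)
          for s in [(n_in, n_units), (n_units, n_units), (n_units,)] * 3]
h = gru_cell_step(x, h, *params)
print(h.shape)  # (5, 16)
```

In a graph network, the line computing `x` is where an attention pooling over neighboring nodes would run before each cell update, which is exactly the step a fused full-sequence GRU kernel cannot accommodate.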
If you want to open an issue in the Haste repo, go ahead.
Okay, thanks, I opened an issue in haste.
Do you see a way to replace the keras GRU by the haste_tf GRU version from here: https://github.com/lmnt-com/haste/blob/master/docs/tf/haste_tf.md ?
There are a lot of advantages in this package, e.g. regularization at the CUDA level.
Unfortunately I don't see a way to plug haste_tf in at the keras level (lmnt-com/haste#33).
Your help would be appreciated.