FWIW, I think this is a great post. But I really don't like the way people are treating it like a "knockout blow" against AI 2027. It's healthy debate!
I'm sure many people feel the same way. But wouldn't that just make that observation even stronger—people care about animal welfare so much that they'd like to go even further than in-ovo testing?
Agree with your first point. For the second point, I felt like I had to add some artifice because otherwise the morally correct choice in almost all situations would seem to obviously be "ask humanity and let it choose for itself"! Which is correct, but not very interesting.
(In any case, I'm not actually that interested in these particular moral puzzles, I have other purposes in asking...)
Subscription confirmed!
Confirmed!
(PS I love pedantic emails)
I first tried it with my RSS reader, but I also get an error if I just try to load that URL in a web browser. (Any browser.)
Confirmed! This mostly seems to work, but I also get some kind of parse error. You might want to check if there's a problem.
Ah, I see, very nice. I wonder if it might make sense to declare the dimensions that are supposed to match once and for all when you wrap the function?
E.g. perhaps you could write:
@new_wrap('m, n, m n->')
def my_op(x, y, a):
    return y @ jnp.linalg.solve(a, x)
to declare the matching dimensions of the wrapped function and then call it with something like
Z = my_op('i [:], j [:], i j [: :]->i j', X, Y, A)
It's a small thing but it seems like the matching declaration should be done "once and for all"?
(On the other hand, I guess there might be cases where the way things match depend on the arguments...)
Edit: Or perhaps if you declare the matching shapes when you wrap the function you wouldn't actually need to use brackets at all, and could just call it as:
Z = my_op('i :, j :, i j : :->i j', X, Y, A)
?
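To make that concrete, here's a rough sketch of how a new_wrap decorator like that might be built on top of einx.vmap. Everything below is hypothetical: the name new_wrap, the ':' placeholder handling, and the assumption that einx accepts separate brackets like [m] [n] in place of [m n] are my guesses, not existing einx features, so the details might need tweaking.

from functools import partial
from jax import numpy as jnp
import einx

def new_wrap(core_pattern):
    # Hypothetical: declare each argument's "matching" (core) axes once.
    # E.g. 'm, n, m n->' means arg 0 consumes m, arg 1 consumes n,
    # and arg 2 consumes both m and n.
    core_axes = [arg.split() for arg in core_pattern.split('->')[0].split(',')]

    def decorator(fun):
        def wrapped(pattern, *tensors, **kwargs):
            # Fill each ':' placeholder with the declared core axis, bracketed
            # the way einx.vmap expects, so 'i :, j :, i j : :->i j' becomes
            # 'i [m], j [n], i j [m] [n]->i j'.
            lhs, rhs = pattern.split('->')
            full_args = []
            for arg, axes in zip(lhs.split(','), core_axes):
                names = iter(axes)
                tokens = ['[' + next(names) + ']' if tok == ':' else tok
                          for tok in arg.split()]
                full_args.append(' '.join(tokens))
            return einx.vmap(', '.join(full_args) + '->' + rhs,
                             *tensors, op=fun, **kwargs)
        return wrapped
    return decorator

# Usage (with the X, Y, A arrays from the einx example below):
@new_wrap('m, n, m n->')
def my_op(x, y, a):
    return y @ jnp.linalg.solve(a, x)

Z = my_op('i :, j :, i j : :->i j', X, Y, A)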
OK, I gave it a shot on the initial example in my post:
import einx
from jax import numpy as jnp
import numpy as onp
import jax

X = jnp.array(onp.random.randn(20, 5))
Y = jnp.array(onp.random.randn(30, 5))
A = jnp.array(onp.random.randn(20, 30, 5, 5))

def my_op(x, y, a):
    # inside the vmap, x is (5,), y is (5,), and a is (5, 5)
    print(x.shape)
    return y @ jnp.linalg.solve(a, x)

Z = einx.vmap("i [m], j [n], i j [m n]->i j", X, Y, A, op=my_op)
Aaaaand, it seemed to work the first time! Well done!
I am a little confused though, because if I use "i [a], j [b], i j [c d]->i j"
it still seems to work, so maybe I don't actually 100% understand that bracket notation after all...
Two more thoughts:
- I added a link.
- You gotta add
def wrap(fun): return partial(einx.vmap, op=fun)
for easy wrapping. :)
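(To spell that out: here's roughly what I have in mind, reusing my_op, X, Y, and A from the example above. The helper name wrap is just my suggestion, not something einx provides.)

from functools import partial
import einx

def wrap(fun):
    # bind the user-defined op, so the result can be called with just an
    # einx pattern and the tensors
    return partial(einx.vmap, op=fun)

vmapped_op = wrap(my_op)
Z = vmapped_op("i [m], j [n], i j [m n]->i j", X, Y, A)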
Hey, thanks for pointing this out! I quite like the bracket notation for indicating axes that operations should be applied "to" vs. "over".
One question I have—is it possible for me as a user to define my own function and then apply it with einx-type notation?
Thanks! The one problem with that is that you have to use dumpy.wrap if you ever create a function that uses loops and you then want to call it inside another loop. But I don't see any way around that.
I think this is a fair argument. Current AIs are quite bad at "knowing if they know". I think it's likely that we can/will solve this problem, but I don't have any particularly compelling reason to think so, and I agree that my argument fails if it never gets solved.