[???] encode_memory() dirtiness #15
Hi @kotee4ko, thanks for your interest in this project! I apologize for the hacky implementation of the modeling part. Hope the following answers can help:

This function is required because Transformers have a fixed vocab size. Instead of directly passing …

Since …

Could you elaborate more on this question?

A variable can be in a register or on the stack. In order to distinguish register …

We have two Transformer encoders. Feel free to follow up if you have more questions!
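To build intuition for the "fixed vocab size" point above, here is a minimal, purely hypothetical sketch (not the project's actual code; all names, register lists, and sizes are made up): register names and stack offsets must both be mapped into one bounded id space before a Transformer can embed them, with anything out of range clamped to an unknown id.

```python
# Hypothetical sketch: map a variable location (register name or stack
# offset) into a single fixed-size vocabulary, as a Transformer requires.
# REGISTERS, MAX_STACK_SLOTS, and UNK_ID are illustrative assumptions.

REGISTERS = ["rax", "rbx", "rcx", "rdx", "rsi", "rdi", "rbp", "rsp"]
MAX_STACK_SLOTS = 1000          # ids reserved for stack offsets
UNK_ID = 0                      # fallback for anything unencodable

def encode_location(loc):
    """Return a vocab id for a register name (str) or stack offset (int)."""
    if isinstance(loc, str):
        # Registers get a small fixed id range just past UNK.
        if loc in REGISTERS:
            return 1 + REGISTERS.index(loc)
        return UNK_ID
    # Stack offsets are shifted past the register ids and clamped in range.
    if 0 <= loc < MAX_STACK_SLOTS:
        return 1 + len(REGISTERS) + loc
    return UNK_ID

print(encode_location("rax"))   # 1
print(encode_location(4))       # 13
print(encode_location(10**9))   # 0 (out of range -> UNK)
```

The point of the indirection is that the embedding table has a fixed number of rows, so raw values like arbitrary stack offsets can never be passed directly: they must be folded into a bounded id space first.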
@qibinc, Sir, there are a few things I can't get.

Where is the code that processes these encoded tensors? Wow, this is real hardcore, Sir.
Sir, I need to refactor the code to be able to launch it on a very specific AMD GPU. Thanks.

```python
def tmaxu(t1, t2):
    # Number of unique values in each tensor.
    tm1, tm2 = t1.unique().numel(), t2.unique().numel()
    # torchmetrics' multiclass accuracy needs num_classes >= 2,
    # so clamp the unique count from below.
    ret = max(tm1, tm2, 2)
    return ret
```
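The clamp to 2 in `tmaxu` presumably exists because a multiclass accuracy metric expects at least two classes, while the masked predictions can easily collapse to a single value (unique count 1). A torch-free analog of the same guard, using Python lists in place of tensors, shows the behavior:

```python
def tmaxu_listlike(t1, t2):
    """Pure-Python analog of tmaxu: unique counts, clamped to >= 2."""
    tm1, tm2 = len(set(t1)), len(set(t2))
    return max(tm1, tm2, 2)

print(tmaxu_listlike([5, 5, 5], [5, 5, 5]))   # 2, not 1
print(tmaxu_listlike([0, 1, 2], [1, 1, 1]))   # 3
```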
```python
import torch
from torchmetrics.functional import accuracy

def _shared_epoch_end(self, outputs, prefix):
    final_ret = {}
    if self.retype:
        ret = self._shared_epoch_end_task(outputs, prefix, "retype")
        final_ret = {**final_ret, **ret}
    if self.rename:
        ret = self._shared_epoch_end_task(outputs, prefix, "rename")
        final_ret = {**final_ret, **ret}
    if self.retype and self.rename:
        # Evaluate rename accuracy only on correctly retyped samples.
        retype_preds = torch.cat([x["retype_preds"] for x in outputs])
        retype_targets = torch.cat([x["retype_targets"] for x in outputs])
        rename_preds = torch.cat([x["rename_preds"] for x in outputs])
        rename_targets = torch.cat([x["rename_targets"] for x in outputs])
        binary_mask = retype_preds == retype_targets
        if binary_mask.sum() > 0:
            p_t = rename_preds[binary_mask]
            t_t = rename_targets[binary_mask]
            self.log(
                f"{prefix}_rename_on_correct_retype_acc",
                accuracy(
                    p_t,
                    t_t,
                    task="multiclass",
                    num_classes=tmaxu(p_t, t_t),
                ),
            )
    return final_ret
```
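The "dirty trick" in the code above is just conditional evaluation: rename accuracy is computed only over the subset of samples whose type prediction already matched its target. A torch-free sketch of the same idea (hypothetical helper, plain lists instead of tensors):

```python
def rename_acc_on_correct_retype(retype_preds, retype_targets,
                                 rename_preds, rename_targets):
    """Accuracy of rename predictions, restricted to samples whose
    retype prediction matched its target. Returns None when no sample
    was retyped correctly, since the metric is undefined then."""
    mask = [p == t for p, t in zip(retype_preds, retype_targets)]
    if not any(mask):
        return None
    hits = sum(1 for m, p, t in zip(mask, rename_preds, rename_targets)
               if m and p == t)
    return hits / sum(mask)

# Samples 0 and 1 are retyped correctly; only sample 0's rename matches.
print(rename_acc_on_correct_retype([1, 2, 3], [1, 2, 0],
                                   [7, 8, 9], [7, 0, 9]))  # 0.5
```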
@qibinc Thanks. One more question: how should I average the name and type predictions?
Wow, what a dirty trick!
Hello. Thanks for your kindness in sharing such a good project.

Could you please explain why we need the `encode_memory` function, and why it attempts to compare integers with `''`?

I can't make sense of appending an int token to an array of int tokens instead of an int token.

And my second question is about this one: what does the 1030 constant do, and why?

And in general, why do we define the tokens like this:

but use them like this:

Sorry if my questions are too many; I specialize in system programming, and math with ML is a hobby.

Thanks in advance =)
@pcyin
@qibinc
@jlacomis