
AF3 AtomAttentionDecoder Class: Source Code Walkthrough

2025/1/23 · Source: https://blog.csdn.net/qq_27390023/article/details/145310084

AlphaFold3's AtomAttentionDecoder class broadcasts per-token representations back out to per-atom representations, and models atoms and their pairwise relations through a cross-attention mechanism. This design captures fine-grained atom-level interactions in biomolecular modeling.
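
Before looking at the class itself, a minimal sketch of the broadcast step may help. This is not the AlphaFold3 implementation; it only illustrates, under assumed tensor names (a for per-token activations, tok_idx for an atom-to-token index map), how per-token activations can be expanded to per-atom activations with a gather:

import torch

# Assumed toy shapes: batch, tokens, atoms, token channels (illustrative only)
bs, n_tokens, n_atoms, c_token = 1, 4, 10, 8
a = torch.randn(bs, n_tokens, c_token)                 # per-token activations
tok_idx = torch.randint(0, n_tokens, (bs, n_atoms))    # parent token of each atom

# Broadcast: every atom receives the activation of the token it belongs to
a_per_atom = torch.gather(
    a, dim=1,
    index=tok_idx.unsqueeze(-1).expand(-1, -1, c_token),
)  # shape: (bs, n_atoms, c_token)

The per-atom tensor produced this way is what the decoder's attention blocks then refine using the atom and atom-pair channels described below.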

Source code:

import torch.nn as nn


class AtomAttentionDecoder(nn.Module):
    """AtomAttentionDecoder that broadcasts per-token activations to per-atom activations."""

    def __init__(
            self,
            c_token: int,
            c_atom: int = 128,
            c_atompair: int = 16,
            no_blocks: int = 3,
            no_heads: int = 8,
            dropout: float = 0.0,
            n_queries: int = 32,
            n_keys: int = 128,
            clear_cache_between_blocks: bool = False,
    ):
        """Initialize the AtomAttentionDecoder module.

        Args:
            c_token:
                The number of channels for the token representation.
            c_atom:
                The number of channels for the atom representation. Defaults to 128.
            c_atompair:
                The number of channels for the atom pair representation. Defaults to 16.
            no_blocks:
                Number of blocks.
            no_heads:
                Number of parallel attention heads. Note that c_atom will be split across no_heads
                (i.e. each head will have dimension c_atom // no_heads).
            dropout:
                Dropout probability on attn_output_weights. Default: 0.0 (no dropout).
            n_queries:
                The size of the atom window. Defaults to 32.
            n_keys:
                Number of atoms each atom attends to in local sequence space. Defaults to 128.
            clear_cache_between_blocks:
                Whether to clear CUDA's GPU memory cache between blocks of the
                stack. Slows down each block but can reduce fragmentation.
        """
        super().__init__()
        self.c_token = c_token
        self.c_atom = c_atom
        self.c_atompair = c_atompair
        self.num_blocks = no_blocks
        self.num_heads = no_heads
        self.dropout = dropout
        self.n_queries = n_queries
        self.n_keys = n_keys
        self.clear_cache_between_blocks = clear_cache_between_blocks
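
The listing only covers the constructor's hyperparameter bookkeeping; the submodules that perform the broadcast projection, local attention, and output projection are not shown here. As a quick usage sketch, the decoder can be constructed directly from these hyperparameters. The values below are illustrative assumptions, not values prescribed by the listing:

# Illustrative construction; the argument values are assumptions,
# not taken from the original listing.
decoder = AtomAttentionDecoder(
    c_token=384,      # channels of the incoming per-token representation
    c_atom=128,       # per-atom channels
    c_atompair=16,    # atom-pair channels
    no_blocks=3,      # number of attention blocks
    no_heads=8,       # attention heads (c_atom is split across heads)
    n_queries=32,     # local atom window size
    n_keys=128,       # atoms each query atom attends to in local sequence space
)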
