
Natural language processing (NLP) is a field of computer science, artificial intelligence, and linguistics concerned with the interactions between computers and human (natural) languages. As such, NLP is related to the area of human–computer interaction. Many challenges in NLP involve natural language understanding, that is, enabling computers to derive meaning from human or natural language input, and others involve natural language generation.

0 votes
Accepted

What can NLI do for a chatbot?

Check out this paper: Dialogue Natural Language Inference. Here is the link: https://www.aclweb.org/anthology/P19-1363
CoderOnly · 721
0 votes
Accepted

For the text match problem, what is the difference between question-question match and question-a...

You can refer to the paper "A Deep Look into Neural Ranking Models for Information Retrieval" for more discussion of the different matching tasks.
-1 votes
2 answers
168 views

What can NLI do for a chatbot?

Natural Language Inference (NLI) is the task of predicting a label (entailment, contradiction, or neutral) for a sentence pair. People have invented many deep models to solve this problem. But I can n …
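The NLI task format described above (sentence pair in, one of three labels out) can be sketched with a toy scorer. The word-overlap/negation heuristic below is purely illustrative, not a real NLI model — in practice a trained cross-encoder (e.g. a BERT-style model fine-tuned on SNLI/MNLI) predicts the label:

```python
# Toy sketch of the NLI task format: a sentence pair maps to one of
# three labels. The heuristic (word overlap plus a negation check) is
# invented for illustration; real systems use a trained deep model.

NLI_LABELS = ("entailment", "contradiction", "neutral")

def toy_nli(premise: str, hypothesis: str) -> str:
    p, h = set(premise.lower().split()), set(hypothesis.lower().split())
    negations = {"not", "no", "never"}
    # A negation mismatch over otherwise-shared content suggests contradiction.
    if (p & negations) != (h & negations) and len((p & h) - negations) > 0:
        return "contradiction"
    # High lexical overlap with the hypothesis suggests entailment.
    overlap = len(p & h) / max(len(h), 1)
    return "entailment" if overlap > 0.6 else "neutral"

pairs = [
    ("a man is playing a guitar", "a man is playing a guitar"),
    ("a man is playing a guitar", "a man is not playing a guitar"),
    ("a man is playing a guitar", "a woman is cooking dinner"),
]
for premise, hypothesis in pairs:
    print(toy_nli(premise, hypothesis))  # entailment, contradiction, neutral
```

For a chatbot, a scorer with this interface can check whether a candidate reply contradicts the dialogue history before it is emitted, which is the use case the Dialogue NLI paper studies.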
1 vote
2 answers
83 views

For the text match problem, what is the difference between question-question match and question-a...

I know question-question match is a text similarity problem. What about question-answer or question-doc match? It is used in information retrieval. Question-question match is indeed text similarity. …
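The distinction in the question above can be made concrete: question-question match is roughly symmetric surface similarity, while question-answer match is relevance, where a good answer may share few words with the question. A minimal bag-of-words cosine sketch (illustrative only; the example sentences are invented):

```python
import math
from collections import Counter

# Minimal sketch: question-question match scored as symmetric cosine
# similarity over bag-of-words term counts. Question-answer match is a
# relevance problem, so surface similarity alone is a weak signal there.

def cosine(a: str, b: str) -> float:
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[t] * vb[t] for t in va)
    norm = (math.sqrt(sum(c * c for c in va.values()))
            * math.sqrt(sum(c * c for c in vb.values())))
    return dot / norm if norm else 0.0

q1 = "how do i reset my password"
q2 = "how can i reset my password"   # paraphrase: high lexical overlap
answer = "click the forgot link on the login page and follow the email"

print(cosine(q1, q2))      # high: paraphrased questions overlap heavily
print(cosine(q1, answer))  # low: a relevant answer can share no words
```

This is why question-answer ranking models typically learn an asymmetric relevance function instead of reusing a similarity model unchanged.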
0 votes
1 answer
488 views

What is the position embedding code?

https://github.com/google-research/bert/blob/master/modeling.py#L491-L520 The BERT code is one implementation, but it is not what I need. I have searched a lot but cannot judge. But where is th …
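For contrast with BERT's learned position embeddings (the lookup table built in the `modeling.py` lines linked above), the other common choice is the fixed sinusoidal embedding from the original Transformer: PE[pos, 2i] = sin(pos / 10000^(2i/d)) and PE[pos, 2i+1] = cos(pos / 10000^(2i/d)). A minimal sketch:

```python
import math

# Sketch of the fixed sinusoidal position embedding from the original
# Transformer. BERT instead *learns* its position embeddings as an
# ordinary trainable lookup table of shape [max_len, d_model].

def sinusoidal_position_embeddings(max_len, d_model):
    table = [[0.0] * d_model for _ in range(max_len)]
    for pos in range(max_len):
        for i in range(0, d_model, 2):
            angle = pos / (10000 ** (i / d_model))
            table[pos][i] = math.sin(angle)       # even dims: sine
            if i + 1 < d_model:
                table[pos][i + 1] = math.cos(angle)  # odd dims: cosine
    return table

pe = sinusoidal_position_embeddings(max_len=4, d_model=8)
print(pe[0][:4])  # position 0: sin terms are 0.0, cos terms are 1.0
```

Either table is simply added to the token embeddings before the first attention layer; the sinusoidal variant needs no parameters and extrapolates to unseen positions.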
1 vote
1 answer
71 views

Why does TREC set two tasks: document ranking and passage ranking?

TREC is https://microsoft.github.io/TREC-2019-Deep-Learning/ I am new to text retrieval and still cannot understand why the two similar tasks are set. Thank you very much.
0 votes
1 answer
141 views

Is it possible to create a rule-based algorithm to compute the relevance score of question-a...

In an information retrieval or question answering system, we use TF-IDF or BM25 to compute the similarity score of a question-question pair as the baseline, or as coarse ranking before deep learning. In communit …
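The BM25 baseline mentioned in the question is itself a rule-based relevance scorer, so it directly answers "is a rule-based algorithm possible". A minimal Okapi-style sketch with the usual k1/b defaults (illustrative, not a tuned ranker; the example documents are invented):

```python
import math
from collections import Counter

# Minimal Okapi-style BM25 sketch: a rule-based relevance score that is
# the standard coarse-ranking baseline before neural re-ranking.

def bm25_scores(query, docs, k1=1.5, b=0.75):
    tokenized = [d.lower().split() for d in docs]
    N = len(tokenized)
    avgdl = sum(len(d) for d in tokenized) / N
    q_terms = query.lower().split()
    # Document frequency of each query term across the collection.
    df = {t: sum(1 for d in tokenized if t in d) for t in q_terms}
    scores = []
    for d in tokenized:
        tf = Counter(d)
        score = 0.0
        for t in q_terms:
            if df[t] == 0:
                continue
            idf = math.log(1 + (N - df[t] + 0.5) / (df[t] + 0.5))
            # Term-frequency saturation (k1) and length normalization (b).
            denom = tf[t] + k1 * (1 - b + b * len(d) / avgdl)
            score += idf * tf[t] * (k1 + 1) / denom
        scores.append(score)
    return scores

docs = [
    "how to train a neural ranking model",
    "bm25 is a strong ranking baseline in information retrieval",
    "recipes for apple pie",
]
print(bm25_scores("ranking baseline", docs))  # doc 2 scores highest, doc 3 gets 0
```

For question-answer pairs, BM25 over the answer text gives a serviceable coarse score, but it only rewards lexical overlap, which is exactly the gap the deep re-rankers are meant to close.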
1 vote
0 answers
23 views

In ChatGPT, the difference between using reward to guide policy vs using the dataset of reward to...

In ChatGPT, what are the differences between using a reward to guide the policy and using a dataset of rewards to train the policy?
0 votes
Accepted

What is the position embedding code?

Here is the code implemented in TensorFlow: Implementation 1: Implementation 2:
0 votes
1 answer
394 views

What is the reason for the speedup of Transformer-XL?

The inference speed of Transformer-XL is faster than the vanilla Transformer. Why? If state reuse is the reason, is the comparison two 32-seq-len segments with state reuse vs one 64-seq-len segment without state reuse?
0 votes
Accepted

What is the reason for the speedup of Transformer-XL?

For vanilla Transformer language models (Al-Rfou et al.), you process [1 2 3 4], predict 5, process [2 3 4 5], predict 6, and repeat. For a Transformer-XL language model, you process [1 2 3 4], predic …
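A back-of-envelope token count makes the answer above concrete: the vanilla evaluation scheme re-encodes a full context window for every prediction, while Transformer-XL caches the previous segment's hidden states and encodes each token only once. This sketch deliberately ignores per-layer attention costs and counts only tokens pushed through the encoder:

```python
# Back-of-envelope sketch: tokens pushed through the network to score a
# sequence of T tokens with context window L. Vanilla evaluation slides
# the window by one and re-encodes it per prediction; Transformer-XL
# reuses cached segment states, so each token is encoded once.

def vanilla_tokens_processed(T, L):
    # One full L-token window per predicted token: [1..L] -> predict L+1, etc.
    return (T - L) * L

def xl_tokens_processed(T, L):
    # Cached states are reused rather than recomputed.
    return T

T, L = 1024, 64
print(vanilla_tokens_processed(T, L))  # 61440
print(xl_tokens_processed(T, L))       # 1024
```

So the speedup does not require comparing "two 32-length segments with reuse vs one 64-length segment without": the saving comes from not re-encoding the overlapping context at every step.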
0 votes
Accepted

Are there some research papers about text-to-set generation?

Check out the papers below, and google the keywords "multi-label classification": X-BERT: eXtreme Multi-label Text Classification with BERT; HAXMLNet: Hierarchical Attention Network for Extreme Multi-Labe …
0 votes
1 answer
73 views

Are there some research papers about text-to-set generation?

I have googled but found no results. Text-to-(word)set generation, or sequence-to-(token)set generation. For example, input a text and output the tags for that text: 'Peter is studying English' …
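Framed as multi-label classification, "text-to-set" reduces to scoring every tag independently and emitting the set of tags above a threshold. A toy sketch of that interface, using the question's own example sentence — the keyword lexicon below is invented for illustration, where a real system would learn the per-tag scorer (e.g. a BERT-based multi-label classifier):

```python
# Toy sketch of text-to-set as multi-label classification: score each tag
# independently, emit the set above a threshold. The keyword lexicon is
# invented for illustration; real systems learn the per-tag scorer.

TAG_KEYWORDS = {
    "education": {"studying", "school", "learning"},
    "language": {"english", "french", "grammar"},
    "person": {"peter", "mary", "he", "she"},
}

def text_to_tagset(text, threshold=0):
    tokens = set(text.lower().split())
    # Each tag fires independently, so the output is a set, not a sequence.
    return {tag for tag, kws in TAG_KEYWORDS.items()
            if len(tokens & kws) > threshold}

print(sorted(text_to_tagset("Peter is studying English")))
# ['education', 'language', 'person']
```

Because each tag decision is independent, the output is order-free and variable-sized, which is exactly what distinguishes set generation from sequence generation.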
1 vote
1 answer
536 views

Based on transformer, how to improve the text generation results?

If I do not pretrain the text generation model (like BART), how can I improve the results with a plain Transformer such as tensor2tensor? What are the improvement ideas for the Transformer on text generation tasks?
1 vote
1 answer
263 views

In ChatGPT, what is the difference between Reinforcement-Learning-from-Human-Feedback and Da... [closed]

Reinforcement-Learning-from-Human-Feedback vs TrainingData-Label-Again.
