azure-docs-sdk-java/docs-ref-autogen/com.azure.ai.textanalytics.yml

### YamlMime:JavaPackage
uid: "com.azure.ai.textanalytics"
fullName: "com.azure.ai.textanalytics"
name: "com.azure.ai.textanalytics"
summary: "[Azure AI Language Service][] is a cloud-based natural language processing (NLP) service offered by Microsoft Azure.\n\n\n[Azure AI Language Service]: https://learn.microsoft.com/azure/ai-services/language-service"
classes:
- "com.azure.ai.textanalytics.TextAnalyticsAsyncClient"
- "com.azure.ai.textanalytics.TextAnalyticsClient"
- "com.azure.ai.textanalytics.TextAnalyticsClientBuilder"
enums:
- "com.azure.ai.textanalytics.TextAnalyticsServiceVersion"
desc: "[Azure AI Language Service][] is a cloud-based natural language processing (NLP) service offered by Microsoft Azure. It's designed to extract valuable insights and information from text data through various NLP techniques. The service provides a range of capabilities for analyzing text, including sentiment analysis, entity recognition, key phrase extraction, language detection, and more. These capabilities can be leveraged to gain a deeper understanding of textual data, automate processes, and make informed decisions based on the analyzed content.\n\nHere are some of the key features of Azure Text Analytics:\n\n * Sentiment Analysis: This feature determines the sentiment expressed in a piece of text, whether it's positive, negative, or neutral. It's useful for understanding the overall emotional tone of customer reviews, social media posts, and other text-based content.\n * Entity Recognition: Azure AI Language can identify and categorize entities mentioned in the text, such as people, organizations, locations, dates, and more. This is particularly useful for extracting structured information from unstructured text.\n * Key Phrase Extraction: The service can automatically identify and extract key phrases or important terms from a given text. This can help summarize the main topics or subjects discussed in the text.\n * Language Detection: Azure AI Language can detect the language in which the text is written. This is useful for routing content to appropriate language-specific processes or for organizing and categorizing multilingual data.\n * Named Entity Recognition: In addition to identifying entities, the service can categorize them into pre-defined types, such as person names, organization names, locations, dates, and more.\n * Entity Linking: This feature can link recognized entities to external databases or sources of information, enriching the extracted data with additional context.\n * Customizable Models: Azure AI Language allows you to fine-tune and train the service's models with your specific domain or industry terminology, which can enhance the accuracy of entity recognition and sentiment analysis.\n\nThe Azure Text Analytics library is a client library that provides Java developers with a simple and easy-to-use interface for accessing and using the Azure AI Language Service. This library allows developers to can be used to analyze unstructured text for tasks, such as sentiment analysis, entities recognition(PII, Health, Linked, Custom), key phrases extraction, language detection, abstractive and extractive summarizations, single-label and multi-label classifications, and execute multiple actions/operations in a single request.\n\n## Getting Started ##\n\nIn order to interact with the Text Analytics features in Azure AI Language Service, you'll need to create an instance of the Text Analytics Client class. To make this possible you'll need the key credential of the service. Alternatively, you can use AAD authentication via [Azure Identity][] to connect to the service.\n\n1. Azure Key Credential, see <xref uid=\"com.azure.ai.textanalytics.TextAnalyticsClientBuilder#credential(\n com.azure.core.credential.AzureKeyCredential)\" data-throw-if-not-resolved=\"false\" data-raw-source=\"AzureKeyCredential\"></xref>.\n2. 
Azure Active Directory, see <xref uid=\"com.azure.ai.textanalytics.TextAnalyticsClientBuilder#credential(\n com.azure.core.credential.TokenCredential)\" data-throw-if-not-resolved=\"false\" data-raw-source=\"TokenCredential\"></xref>.\n\n**Sample: Construct Synchronous Text Analytics Client with Azure Key Credential**\n\nThe following code sample demonstrates the creation of a <xref uid=\"com.azure.ai.textanalytics.TextAnalyticsClient\" data-throw-if-not-resolved=\"false\" data-raw-source=\"com.azure.ai.textanalytics.TextAnalyticsClient\"></xref>, using the <xref uid=\"com.azure.ai.textanalytics.TextAnalyticsClientBuilder\" data-throw-if-not-resolved=\"false\" data-raw-source=\"com.azure.ai.textanalytics.TextAnalyticsClientBuilder\"></xref> to configure it with a key credential.\n\n```java\nTextAnalyticsClient textAnalyticsClient = new TextAnalyticsClientBuilder()\n .credential(new AzureKeyCredential(\"{key}\"))\n .endpoint(\"{endpoint}\")\n .buildClient();\n```\n\n**Sample: Construct Asynchronous Text Analytics Client with Azure Key Credential**\n\nThe following code sample demonstrates the creation of a <xref uid=\"com.azure.ai.textanalytics.TextAnalyticsAsyncClient\" data-throw-if-not-resolved=\"false\" data-raw-source=\"com.azure.ai.textanalytics.TextAnalyticsAsyncClient\"></xref>, using the <xref uid=\"com.azure.ai.textanalytics.TextAnalyticsClientBuilder\" data-throw-if-not-resolved=\"false\" data-raw-source=\"com.azure.ai.textanalytics.TextAnalyticsClientBuilder\"></xref> to configure it with a key credential.\n\n```java\nTextAnalyticsAsyncClient textAnalyticsAsyncClient = new TextAnalyticsClientBuilder()\n .credential(new AzureKeyCredential(\"{key}\"))\n .endpoint(\"{endpoint}\")\n .buildAsyncClient();\n```\n\n**Note:** See the methods in the client classes below to explore all features that the library provides.\n\n\n--------------------\n\n## Extract information ##\n\nThe Text Analytics client can use Natural Language Understanding (NLU) to extract information from unstructured text, for example, to identify key phrases or Personally Identifiable Information (PII).
Below you can look at the samples on how to use it.\n\n### Key Phrases Extraction ###\n\nThe <xref uid=\"com.azure.ai.textanalytics.TextAnalyticsClient.extractKeyPhrases(java.lang.String)\" data-throw-if-not-resolved=\"false\" data-raw-source=\"extractKeyPhrases\"></xref> method can be used to extract key phrases, which returns a list of strings denoting the key phrases in the document.\n\n```java\nKeyPhrasesCollection extractedKeyPhrases =\n textAnalyticsClient.extractKeyPhrases(\"My cat might need to see a veterinarian.\");\n for (String keyPhrase : extractedKeyPhrases) {\n System.out.printf(\"%s.%n\", keyPhrase);\n }\n```\n\nSee [this][] for supported languages in Text Analytics API.\n\n**Note:** For asynchronous sample, refer to <xref uid=\"com.azure.ai.textanalytics.TextAnalyticsAsyncClient\" data-throw-if-not-resolved=\"false\" data-raw-source=\"com.azure.ai.textanalytics.TextAnalyticsAsyncClient\"></xref>.\n\n### Named Entities Recognition(NER): Prebuilt Model ###\n\nThe <xref uid=\"com.azure.ai.textanalytics.TextAnalyticsClient.recognizeEntities(java.lang.String)\" data-throw-if-not-resolved=\"false\" data-raw-source=\"recognizeEntities\"></xref> method can be used to recognize entities, which returns a list of general categorized entities in the provided document.\n\n```java\nCategorizedEntityCollection recognizeEntitiesResult =\n textAnalyticsClient.recognizeEntities(\"Satya Nadella is the CEO of Microsoft\");\n for (CategorizedEntity entity : recognizeEntitiesResult) {\n System.out.printf(\"Recognized entity: %s, entity category: %s, confidence score: %f.%n\",\n entity.getText(), entity.getCategory(), entity.getConfidenceScore());\n }\n```\n\nSee [this][] for supported languages in Text Analytics API.\n\n**Note:** For asynchronous sample, refer to <xref uid=\"com.azure.ai.textanalytics.TextAnalyticsAsyncClient\" data-throw-if-not-resolved=\"false\" data-raw-source=\"com.azure.ai.textanalytics.TextAnalyticsAsyncClient\"></xref>.\n\n### Custom Named Entities Recognition(NER): Custom Model ###\n\nThe <xref uid=\"com.azure.ai.textanalytics.TextAnalyticsClient#beginRecognizeCustomEntities(\n java.lang.Iterable, java.lang.String, java.lang.String)\" data-throw-if-not-resolved=\"false\" data-raw-source=\"com.azure.ai.textanalytics.TextAnalyticsClient#beginRecognizeCustomEntities(\n java.lang.Iterable, java.lang.String, java.lang.String)\"></xref> method can be used to recognize custom entities, which returns a list of custom entities for the provided list of [String][].\n\n```java\nList<String> documents = new ArrayList<>();\n for (int i = 0; i < 3; i++) {\n documents.add(\n \"A recent report by the Government Accountability Office (GAO) found that the dramatic increase \"\n + \"in oil and natural gas development on federal lands over the past six years has stretched the\"\n + \" staff of the BLM to a point that it has been unable to meet its environmental protection \"\n + \"responsibilities.\"); }\n SyncPoller<RecognizeCustomEntitiesOperationDetail, RecognizeCustomEntitiesPagedIterable> syncPoller =\n textAnalyticsClient.beginRecognizeCustomEntities(documents, \"{project_name}\", \"{deployment_name}\");\n syncPoller.waitForCompletion();\n syncPoller.getFinalResult().forEach(documentsResults -> {\n System.out.printf(\"Project name: %s, deployment name: %s.%n\",\n documentsResults.getProjectName(), documentsResults.getDeploymentName());\n for (RecognizeEntitiesResult documentResult : documentsResults) {\n System.out.println(\"Document ID: \" + documentResult.getId());\n for 
(CategorizedEntity entity : documentResult.getEntities()) {\n System.out.printf(\n \"\\tText: %s, category: %s, confidence score: %f.%n\",\n entity.getText(), entity.getCategory(), entity.getConfidenceScore());\n }\n }\n });\n```\n\nSee [this][] for supported languages in Text Analytics API.\n\n**Note:** For asynchronous sample, refer to <xref uid=\"com.azure.ai.textanalytics.TextAnalyticsAsyncClient\" data-throw-if-not-resolved=\"false\" data-raw-source=\"com.azure.ai.textanalytics.TextAnalyticsAsyncClient\"></xref>.\n\n### Linked Entities Recognition ###\n\nThe <xref uid=\"com.azure.ai.textanalytics.TextAnalyticsClient.recognizeLinkedEntities(java.lang.String)\" data-throw-if-not-resolved=\"false\" data-raw-source=\"recognizeLinkedEntities\"></xref> method can be used to find linked entities, which returns a list of recognized entities with links to a well-known knowledge base for the provided document.\n\n```java\nString document = \"Old Faithful is a geyser at Yellowstone Park.\";\n System.out.println(\"Linked Entities:\");\n textAnalyticsClient.recognizeLinkedEntities(document).forEach(linkedEntity -> {\n System.out.printf(\"Name: %s, entity ID in data source: %s, URL: %s, data source: %s.%n\",\n linkedEntity.getName(), linkedEntity.getDataSourceEntityId(), linkedEntity.getUrl(),\n linkedEntity.getDataSource());\n linkedEntity.getMatches().forEach(entityMatch -> System.out.printf(\n \"Matched entity: %s, confidence score: %f.%n\",\n entityMatch.getText(), entityMatch.getConfidenceScore()));\n });\n```\n\nSee [this][] for supported languages in Text Analytics API.\n\n**Note:** For asynchronous sample, refer to <xref uid=\"com.azure.ai.textanalytics.TextAnalyticsAsyncClient\" data-throw-if-not-resolved=\"false\" data-raw-source=\"com.azure.ai.textanalytics.TextAnalyticsAsyncClient\"></xref>.\n\n### Personally Identifiable Information(PII) Entities Recognition ###\n\nThe <xref uid=\"com.azure.ai.textanalytics.TextAnalyticsClient.recognizePiiEntities(java.lang.String)\" data-throw-if-not-resolved=\"false\" data-raw-source=\"recognizePiiEntities\"></xref> method can be used to recognize PII entities, which returns a list of Personally Identifiable Information(PII) entities in the provided document. 
For a list of supported entity types, check: [this][this 1]\n\n```java\nPiiEntityCollection piiEntityCollection = textAnalyticsClient.recognizePiiEntities(\"My SSN is 859-98-0987\");\n System.out.printf(\"Redacted Text: %s%n\", piiEntityCollection.getRedactedText());\n for (PiiEntity entity : piiEntityCollection) {\n System.out.printf(\n \"Recognized Personally Identifiable Information entity: %s, entity category: %s,\"\n + \" entity subcategory: %s, confidence score: %f.%n\",\n entity.getText(), entity.getCategory(), entity.getSubcategory(), entity.getConfidenceScore());\n }\n```\n\nSee [this][] for supported languages in Text Analytics API.\n\n**Note:** For asynchronous sample, refer to <xref uid=\"com.azure.ai.textanalytics.TextAnalyticsAsyncClient\" data-throw-if-not-resolved=\"false\" data-raw-source=\"com.azure.ai.textanalytics.TextAnalyticsAsyncClient\"></xref>.\n\n### Text Analytics for Health: Prebuilt Model ###\n\nThe <xref uid=\"com.azure.ai.textanalytics.TextAnalyticsClient.beginAnalyzeHealthcareEntities*\" data-throw-if-not-resolved=\"false\" data-raw-source=\"beginAnalyzeHealthcareEntities\"></xref> method can be used to analyze healthcare entities, entity data sources, and entity relations in a list of [String][].\n\n```java\nList<String> documents = new ArrayList<>();\n for (int i = 0; i < 3; i++) {\n documents.add(\"The patient is a 54-year-old gentleman with a history of progressive angina over \"\n + \"the past several months.\");\n }\n\n SyncPoller<AnalyzeHealthcareEntitiesOperationDetail, AnalyzeHealthcareEntitiesPagedIterable>\n syncPoller = textAnalyticsClient.beginAnalyzeHealthcareEntities(documents);\n\n syncPoller.waitForCompletion();\n AnalyzeHealthcareEntitiesPagedIterable result = syncPoller.getFinalResult();\n\n result.forEach(analyzeHealthcareEntitiesResultCollection -> {\n analyzeHealthcareEntitiesResultCollection.forEach(healthcareEntitiesResult -> {\n System.out.println(\"document id = \" + healthcareEntitiesResult.getId());\n System.out.println(\"Document entities: \");\n AtomicInteger ct = new AtomicInteger();\n healthcareEntitiesResult.getEntities().forEach(healthcareEntity -> {\n System.out.printf(\"\\ti = %d, Text: %s, category: %s, confidence score: %f.%n\",\n ct.getAndIncrement(), healthcareEntity.getText(), healthcareEntity.getCategory(),\n healthcareEntity.getConfidenceScore());\n\n IterableStream<EntityDataSource> healthcareEntityDataSources =\n healthcareEntity.getDataSources();\n if (healthcareEntityDataSources != null) {\n healthcareEntityDataSources.forEach(healthcareEntityLink -> System.out.printf(\n \"\\t\\tEntity ID in data source: %s, data source: %s.%n\",\n healthcareEntityLink.getEntityId(), healthcareEntityLink.getName()));\n }\n });\n // Healthcare entity relation groups\n healthcareEntitiesResult.getEntityRelations().forEach(entityRelation -> {\n System.out.printf(\"\\tRelation type: %s.%n\", entityRelation.getRelationType());\n entityRelation.getRoles().forEach(role -> {\n final HealthcareEntity entity = role.getEntity();\n System.out.printf(\"\\t\\tEntity text: %s, category: %s, role: %s.%n\",\n entity.getText(), entity.getCategory(), role.getName());\n });\n System.out.printf(\"\\tRelation confidence score: %f.%n\",\n entityRelation.getConfidenceScore());\n });\n });\n });\n```\n\nSee [this][] for supported languages in Text Analytics API.\n\n**Note:** For asynchronous sample, refer to <xref uid=\"com.azure.ai.textanalytics.TextAnalyticsAsyncClient\" data-throw-if-not-resolved=\"false\" 
data-raw-source=\"com.azure.ai.textanalytics.TextAnalyticsAsyncClient\"></xref>.\n\n\n--------------------\n\n## Summarize text-based content: Document Summarization ##\n\nText Analytics client can use Natural Language Understanding (NLU) to summarize lengthy documents. For example, extractive or abstractive summarization. Below you can look at the samples on how to use it.\n\n### Extractive summarization ###\n\nThe <xref uid=\"com.azure.ai.textanalytics.TextAnalyticsClient.beginExtractSummary*\" data-throw-if-not-resolved=\"false\" data-raw-source=\"beginExtractSummary\"></xref> method returns a list of extract summaries for the provided list of [String][].\n\nThis method is supported since service API version <xref uid=\"com.azure.ai.textanalytics.TextAnalyticsServiceVersion.V2023_04_01\" data-throw-if-not-resolved=\"false\" data-raw-source=\"com.azure.ai.textanalytics.TextAnalyticsServiceVersion#V2023_04_01\"></xref>.\n\n```java\nList<String> documents = new ArrayList<>();\n for (int i = 0; i < 3; i++) {\n documents.add(\n \"At Microsoft, we have been on a quest to advance AI beyond existing techniques, by taking a more holistic,\"\n + \" human-centric approach to learning and understanding. As Chief Technology Officer of Azure AI\"\n + \" Cognitive Services, I have been working with a team of amazing scientists and engineers to turn \"\n + \"this quest into a reality. In my role, I enjoy a unique perspective in viewing the relationship\"\n + \" among three attributes of human cognition: monolingual text (X), audio or visual sensory signals,\"\n + \" (Y) and multilingual (Z). At the intersection of all three, there\\u2019s magic\\u2014what we call XYZ-code\"\n + \" as illustrated in Figure 1\\u2014a joint representation to create more powerful AI that can speak, hear,\"\n + \" see, and understand humans better. We believe XYZ-code will enable us to fulfill our long-term\"\n + \" vision: cross-domain transfer learning, spanning modalities and languages. The goal is to have\"\n + \" pretrained models that can jointly learn representations to support a broad range of downstream\"\n + \" AI tasks, much in the way humans do today. Over the past five years, we have achieved human\"\n + \" performance on benchmarks in conversational speech recognition, machine translation, \"\n + \"conversational question answering, machine reading comprehension, and image captioning. These\"\n + \" five breakthroughs provided us with strong signals toward our more ambitious aspiration to\"\n + \" produce a leap in AI capabilities, achieving multisensory and multilingual learning that \"\n + \"is closer in line with how humans learn and understand. 
I believe the joint XYZ-code is a \"\n + \"foundational component of this aspiration, if grounded with external knowledge sources in \"\n + \"the downstream AI tasks.\");\n }\n SyncPoller<ExtractiveSummaryOperationDetail, ExtractiveSummaryPagedIterable> syncPoller =\n textAnalyticsClient.beginExtractSummary(documents);\n syncPoller.waitForCompletion();\n syncPoller.getFinalResult().forEach(resultCollection -> {\n for (ExtractiveSummaryResult documentResult : resultCollection) {\n System.out.println(\"\\tExtracted summary sentences:\");\n for (ExtractiveSummarySentence extractiveSummarySentence : documentResult.getSentences()) {\n System.out.printf(\n \"\\t\\t Sentence text: %s, length: %d, offset: %d, rank score: %f.%n\",\n extractiveSummarySentence.getText(), extractiveSummarySentence.getLength(),\n extractiveSummarySentence.getOffset(), extractiveSummarySentence.getRankScore());\n }\n }\n });\n```\n\nSee [this][] for supported languages in Text Analytics API.\n\n**Note:** For asynchronous sample, refer to <xref uid=\"com.azure.ai.textanalytics.TextAnalyticsAsyncClient\" data-throw-if-not-resolved=\"false\" data-raw-source=\"com.azure.ai.textanalytics.TextAnalyticsAsyncClient\"></xref>.\n\n### Abstractive summarization ###\n\nThe <xref uid=\"com.azure.ai.textanalytics.TextAnalyticsClient.beginAbstractSummary*\" data-throw-if-not-resolved=\"false\" data-raw-source=\"beginAbstractSummary\"></xref> method returns a list of abstractive summary for the provided list of [String][].\n\nThis method is supported since service API version <xref uid=\"com.azure.ai.textanalytics.TextAnalyticsServiceVersion.V2023_04_01\" data-throw-if-not-resolved=\"false\" data-raw-source=\"com.azure.ai.textanalytics.TextAnalyticsServiceVersion#V2023_04_01\"></xref>.\n\n```java\nList<String> documents = new ArrayList<>();\n for (int i = 0; i < 3; i++) {\n documents.add(\n \"At Microsoft, we have been on a quest to advance AI beyond existing techniques, by taking a more holistic,\"\n + \" human-centric approach to learning and understanding. As Chief Technology Officer of Azure AI\"\n + \" Cognitive Services, I have been working with a team of amazing scientists and engineers to turn \"\n + \"this quest into a reality. In my role, I enjoy a unique perspective in viewing the relationship\"\n + \" among three attributes of human cognition: monolingual text (X), audio or visual sensory signals,\"\n + \" (Y) and multilingual (Z). At the intersection of all three, there\\u2019s magic\\u2014what we call XYZ-code\"\n + \" as illustrated in Figure 1\\u2014a joint representation to create more powerful AI that can speak, hear,\"\n + \" see, and understand humans better. We believe XYZ-code will enable us to fulfill our long-term\"\n + \" vision: cross-domain transfer learning, spanning modalities and languages. The goal is to have\"\n + \" pretrained models that can jointly learn representations to support a broad range of downstream\"\n + \" AI tasks, much in the way humans do today. Over the past five years, we have achieved human\"\n + \" performance on benchmarks in conversational speech recognition, machine translation, \"\n + \"conversational question answering, machine reading comprehension, and image captioning. These\"\n + \" five breakthroughs provided us with strong signals toward our more ambitious aspiration to\"\n + \" produce a leap in AI capabilities, achieving multisensory and multilingual learning that \"\n + \"is closer in line with how humans learn and understand. 
I believe the joint XYZ-code is a \"\n + \"foundational component of this aspiration, if grounded with external knowledge sources in \"\n + \"the downstream AI tasks.\");\n }\n SyncPoller<AbstractiveSummaryOperationDetail, AbstractiveSummaryPagedIterable> syncPoller =\n textAnalyticsClient.beginAbstractSummary(documents);\n syncPoller.waitForCompletion();\n syncPoller.getFinalResult().forEach(resultCollection -> {\n for (AbstractiveSummaryResult documentResult : resultCollection) {\n System.out.println(\"\\tAbstractive summary sentences:\");\n for (AbstractiveSummary summarySentence : documentResult.getSummaries()) {\n System.out.printf(\"\\t\\t Summary text: %s.%n\", summarySentence.getText());\n for (AbstractiveSummaryContext abstractiveSummaryContext : summarySentence.getContexts()) {\n System.out.printf(\"\\t\\t offset: %d, length: %d%n\",\n abstractiveSummaryContext.getOffset(), abstractiveSummaryContext.getLength());\n }\n }\n }\n });\n```\n\nSee [this][] for supported languages in Text Analytics API.\n\n**Note:** For asynchronous sample, refer to <xref uid=\"com.azure.ai.textanalytics.TextAnalyticsAsyncClient\" data-throw-if-not-resolved=\"false\" data-raw-source=\"com.azure.ai.textanalytics.TextAnalyticsAsyncClient\"></xref>.\n\n\n--------------------\n\n## Classify Text ##\n\nThe Text Analytics client can use Natural Language Understanding (NLU) to detect the language of your text or classify it, for example, with language detection, sentiment analysis, or custom text classification. Below you can look at the samples on how to use it.\n\n### Analyze Sentiment and Mine Text for Opinions ###\n\nThe <xref uid=\"com.azure.ai.textanalytics.TextAnalyticsClient#analyzeSentiment(java.lang.String, java.lang.String,\n com.azure.ai.textanalytics.models.AnalyzeSentimentOptions)\" data-throw-if-not-resolved=\"false\" data-raw-source=\"com.azure.ai.textanalytics.TextAnalyticsClient#analyzeSentiment(java.lang.String, java.lang.String,\n com.azure.ai.textanalytics.models.AnalyzeSentimentOptions)\"></xref> method can be used to analyze sentiment on a given input text string, which returns a sentiment prediction, as well as confidence scores for each sentiment label (Positive, Negative, and Neutral) for the document and each sentence within it. If the `includeOpinionMining` option of <xref uid=\"com.azure.ai.textanalytics.models.AnalyzeSentimentOptions\" data-throw-if-not-resolved=\"false\" data-raw-source=\"com.azure.ai.textanalytics.models.AnalyzeSentimentOptions\"></xref> is set to true, the output will include the opinion mining results. It mines the opinions of a sentence and conducts more granular analysis around the aspects in the text (also known as aspect-based sentiment analysis).\n\n```java\nDocumentSentiment documentSentiment = textAnalyticsClient.analyzeSentiment(\n \"The hotel was dark and unclean.\", \"en\",\n new AnalyzeSentimentOptions().setIncludeOpinionMining(true));\n for (SentenceSentiment sentenceSentiment : documentSentiment.getSentences()) {\n System.out.printf(\"\\tSentence sentiment: %s%n\", sentenceSentiment.getSentiment());\n sentenceSentiment.getOpinions().forEach(opinion -> {\n TargetSentiment targetSentiment = opinion.getTarget();\n System.out.printf(\"\\tTarget sentiment: %s, target text: %s%n\", targetSentiment.getSentiment(),\n targetSentiment.getText());\n for (AssessmentSentiment assessmentSentiment : opinion.getAssessments()) {\n System.out.printf(\"\\t\\t'%s' sentiment because of \\\"%s\\\".
Is the assessment negated: %s.%n\",\n assessmentSentiment.getSentiment(), assessmentSentiment.getText(), assessmentSentiment.isNegated());\n }\n });\n }\n```\n\nSee [this][] for supported languages in Text Analytics API.\n\n**Note:** For asynchronous sample, refer to <xref uid=\"com.azure.ai.textanalytics.TextAnalyticsAsyncClient\" data-throw-if-not-resolved=\"false\" data-raw-source=\"com.azure.ai.textanalytics.TextAnalyticsAsyncClient\"></xref>.\n\n### Detect Language ###\n\nThe <xref uid=\"com.azure.ai.textanalytics.TextAnalyticsClient.detectLanguage(java.lang.String)\" data-throw-if-not-resolved=\"false\" data-raw-source=\"detectLanguage\"></xref> method returns the detected language and a confidence score between zero and one. Scores close to one indicate 100% certainty that the identified language is correct.\n\nThis method uses the default country hint that is set in <xref uid=\"com.azure.ai.textanalytics.TextAnalyticsClientBuilder.defaultCountryHint(java.lang.String)\" data-throw-if-not-resolved=\"false\" data-raw-source=\"com.azure.ai.textanalytics.TextAnalyticsClientBuilder#defaultCountryHint(String)\"></xref>. If none is specified, the service uses 'US' as the country hint.\n\n```java\nDetectedLanguage detectedLanguage = textAnalyticsClient.detectLanguage(\"Bonjour tout le monde\");\n System.out.printf(\"Detected language name: %s, ISO 6391 name: %s, confidence score: %f.%n\",\n detectedLanguage.getName(), detectedLanguage.getIso6391Name(), detectedLanguage.getConfidenceScore());\n```\n\nSee [this][] for supported languages in Text Analytics API.\n\n**Note:** For asynchronous sample, refer to <xref uid=\"com.azure.ai.textanalytics.TextAnalyticsAsyncClient\" data-throw-if-not-resolved=\"false\" data-raw-source=\"com.azure.ai.textanalytics.TextAnalyticsAsyncClient\"></xref>.\n\n### Single-Label Classification ###\n\nThe <xref uid=\"com.azure.ai.textanalytics.TextAnalyticsClient#beginSingleLabelClassify(java.lang.Iterable,\n java.lang.String, java.lang.String)\" data-throw-if-not-resolved=\"false\" data-raw-source=\"beginSingleLabelClassify\"></xref> method returns a list of single-label classifications for the provided list of [String][].\n\n**Note:** This method is supported since service API version <xref uid=\"com.azure.ai.textanalytics.TextAnalyticsServiceVersion.V2022_05_01\" data-throw-if-not-resolved=\"false\" data-raw-source=\"com.azure.ai.textanalytics.TextAnalyticsServiceVersion#V2022_05_01\"></xref>.\n\n```java\nList<String> documents = new ArrayList<>();\n for (int i = 0; i < 3; i++) {\n documents.add(\n \"A recent report by the Government Accountability Office (GAO) found that the dramatic increase \"\n + \"in oil and natural gas development on federal lands over the past six years has stretched the\"\n + \" staff of the BLM to a point that it has been unable to meet its environmental protection \"\n + \"responsibilities.\"\n );\n }\n // See the service documentation for regional support and how to train a model to classify your documents,\n // see https://aka.ms/azsdk/textanalytics/customfunctionalities\n SyncPoller<ClassifyDocumentOperationDetail, ClassifyDocumentPagedIterable> syncPoller =\n textAnalyticsClient.beginSingleLabelClassify(documents, \"{project_name}\", \"{deployment_name}\");\n syncPoller.waitForCompletion();\n syncPoller.getFinalResult().forEach(documentsResults -> {\n System.out.printf(\"Project name: %s, deployment name: %s.%n\",\n documentsResults.getProjectName(), documentsResults.getDeploymentName());\n for
(ClassifyDocumentResult documentResult : documentsResults) {\n System.out.println(\"Document ID: \" + documentResult.getId());\n for (ClassificationCategory classification : documentResult.getClassifications()) {\n System.out.printf(\"\\tCategory: %s, confidence score: %f.%n\",\n classification.getCategory(), classification.getConfidenceScore());\n }\n }\n });\n```\n\nSee [this][] for supported languages in Text Analytics API.\n\n**Note:** For asynchronous sample, refer to <xref uid=\"com.azure.ai.textanalytics.TextAnalyticsAsyncClient\" data-throw-if-not-resolved=\"false\" data-raw-source=\"com.azure.ai.textanalytics.TextAnalyticsAsyncClient\"></xref>.\n\n### Multi-Label Classification ###\n\nThe <xref uid=\"com.azure.ai.textanalytics.TextAnalyticsClient#beginMultiLabelClassify(java.lang.Iterable,\n java.lang.String, java.lang.String)\" data-throw-if-not-resolved=\"false\" data-raw-source=\"beginMultiLabelClassify\"></xref> method returns a list of multi-label classifications for the provided list of [String][].\n\n**Note:** This method is supported since service API version <xref uid=\"com.azure.ai.textanalytics.TextAnalyticsServiceVersion.V2022_05_01\" data-throw-if-not-resolved=\"false\" data-raw-source=\"com.azure.ai.textanalytics.TextAnalyticsServiceVersion#V2022_05_01\"></xref>.\n\n```java\nList<String> documents = new ArrayList<>();\n for (int i = 0; i < 3; i++) {\n documents.add(\n \"I need a reservation for an indoor restaurant in China. Please don't stop the music.\"\n + \" Play music and add it to my playlist\");\n }\n SyncPoller<ClassifyDocumentOperationDetail, ClassifyDocumentPagedIterable> syncPoller =\n textAnalyticsClient.beginMultiLabelClassify(documents, \"{project_name}\", \"{deployment_name}\");\n syncPoller.waitForCompletion();\n syncPoller.getFinalResult().forEach(documentsResults -> {\n System.out.printf(\"Project name: %s, deployment name: %s.%n\",\n documentsResults.getProjectName(), documentsResults.getDeploymentName());\n for (ClassifyDocumentResult documentResult : documentsResults) {\n System.out.println(\"Document ID: \" + documentResult.getId());\n for (ClassificationCategory classification : documentResult.getClassifications()) {\n System.out.printf(\"\\tCategory: %s, confidence score: %f.%n\",\n classification.getCategory(), classification.getConfidenceScore());\n }\n }\n });\n```\n\nSee [this][] for supported languages in Text Analytics API.\n\n**Note:** For asynchronous sample, refer to <xref uid=\"com.azure.ai.textanalytics.TextAnalyticsAsyncClient\" data-throw-if-not-resolved=\"false\" data-raw-source=\"com.azure.ai.textanalytics.TextAnalyticsAsyncClient\"></xref>.\n\n\n--------------------\n\n## Execute multiple actions ##\n\nThe <xref uid=\"com.azure.ai.textanalytics.TextAnalyticsClient#beginAnalyzeActions(java.lang.Iterable,\n com.azure.ai.textanalytics.models.TextAnalyticsActions)\" data-throw-if-not-resolved=\"false\" data-raw-source=\"beginAnalyzeActions\"></xref> method executes multiple actions, such as entity recognition, PII entity recognition, and key phrase extraction, for a list of [String][].\n\n```java\nList<String> documents = Arrays.asList(\n \"Elon Musk is the CEO of SpaceX and Tesla.\",\n \"My SSN is 859-98-0987\"\n );\n\n SyncPoller<AnalyzeActionsOperationDetail, AnalyzeActionsResultPagedIterable> syncPoller =\n textAnalyticsClient.beginAnalyzeActions(\n documents,\n new TextAnalyticsActions().setDisplayName(\"{tasks_display_name}\")\n .setRecognizeEntitiesActions(new RecognizeEntitiesAction())\n .setExtractKeyPhrasesActions(new
ExtractKeyPhrasesAction()));\n syncPoller.waitForCompletion();\n AnalyzeActionsResultPagedIterable result = syncPoller.getFinalResult();\n result.forEach(analyzeActionsResult -> {\n System.out.println(\"Entities recognition action results:\");\n analyzeActionsResult.getRecognizeEntitiesResults().forEach(\n actionResult -> {\n if (!actionResult.isError()) {\n actionResult.getDocumentsResults().forEach(\n entitiesResult -> entitiesResult.getEntities().forEach(\n entity -> System.out.printf(\n \"Recognized entity: %s, entity category: %s, entity subcategory: %s,\"\n + \" confidence score: %f.%n\",\n entity.getText(), entity.getCategory(), entity.getSubcategory(),\n entity.getConfidenceScore())));\n }\n });\n System.out.println(\"Key phrases extraction action results:\");\n analyzeActionsResult.getExtractKeyPhrasesResults().forEach(\n actionResult -> {\n if (!actionResult.isError()) {\n actionResult.getDocumentsResults().forEach(extractKeyPhraseResult -> {\n System.out.println(\"Extracted phrases:\");\n extractKeyPhraseResult.getKeyPhrases()\n .forEach(keyPhrases -> System.out.printf(\"\\t%s.%n\", keyPhrases));\n });\n }\n });\n });\n```\n\nSee [this][] for supported languages in Text Analytics API.\n\n**Note:** For asynchronous sample, refer to <xref uid=\"com.azure.ai.textanalytics.TextAnalyticsAsyncClient\" data-throw-if-not-resolved=\"false\" data-raw-source=\"com.azure.ai.textanalytics.TextAnalyticsAsyncClient\"></xref>.\n\n\n[Azure AI Language Service]: https://learn.microsoft.com/azure/ai-services/language-service\n[Azure Identity]: https://learn.microsoft.com/java/api/overview/azure/identity-readme?view=azure-java-stable\n[this]: https://aka.ms/talangs\n[String]: https://docs.oracle.com/javase/8/docs/api/java/lang/String.html\n[this 1]: https://aka.ms/azsdk/language/pii"
metadata: {}
package: "com.azure.ai.textanalytics"
artifact: com.azure:azure-ai-textanalytics:5.5.1