
Progress and Challenges in Open-Domain Long-Form Question Answering Systems

Tensorflowers · Source: Google Research · 2021-05-31

Posted by: Aurko Roy, Research Scientist, Google Research

Open-domain long-form question answering (LFQA) is a fundamental challenge in natural language processing (NLP): it involves retrieving documents relevant to a given question and using them to generate a detailed, paragraph-length answer. In factoid open-domain question answering (QA), a short phrase or entity is enough to answer the question. While there has been remarkable recent progress on factoid QA, far less progress has been made on long-form question answering. Nevertheless, LFQA is an important task, especially because it provides a testbed for measuring the factuality of generative text models. But are current benchmarks and evaluation metrics really suited to making progress on LFQA?

In "Hurdles to Progress in Long-form Question Answering" (to appear at NAACL 2021), we present a new open-domain long-form question answering system that leverages two recent advances in NLP:

1. State-of-the-art sparse attention models, such as the Routing Transformer (RT), which allow attention-based models to scale to long sequences;

2. Retrieval-based models, such as REALM, which facilitate retrieval of Wikipedia articles relevant to a given query.

Routing Transformer: https://www.mitpressjournals.org/doi/full/10.1162/tacl_a_00353

For better factual grounding, our system combines information from several Wikipedia articles retrieved as relevant to the given question before generating an answer. It achieves a new state of the art on ELI5, the only large-scale publicly available dataset for long-form question answering.

ELI5: https://ai.facebook.com/blog/longform-qa/

However, while this system tops the public leaderboard, we discovered several troubling trends with the ELI5 dataset and its associated evaluation metrics. In particular, we found that 1) there is little evidence that models actually use the retrievals on which they condition; 2) trivial baselines (e.g., input copying) beat modern systems such as RAG and BART + DPR; and 3) there is significant train/validation overlap in the dataset. Our paper proposes mitigation strategies for each of these issues.

Input copying: https://eval.ai/web/challenges/challenge-page/689/leaderboard/1908#leaderboardrank-6

Text Generation

At the core of NLP models is the Transformer architecture, in which every token in a sequence attends to every other token, yielding a model whose cost grows quadratically with sequence length. The RT model introduces a dynamic, content-based sparse attention mechanism that reduces the attention complexity of the Transformer from n^2 to n^1.5 (where n is the sequence length), enabling it to scale to long sequences. This allows each word to attend to relevant words anywhere in the text, unlike methods such as Transformer-XL, where a word can only attend to words in its immediate vicinity.

The key insight behind the RT model is that each token's attention to every other token is often redundant and can be approximated by a combination of local and global attention. Local attention allows each token to build a local representation over several layers of the model, with each token attending to a local neighborhood, which provides local consistency and fluency. Complementing local attention, the RT model also uses mini-batch k-means clustering so that each token attends only to a set of the most relevant tokens. See the sketch below for an illustration of this idea.
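The following NumPy sketch illustrates the mechanism: each query attends to a small local window plus the tokens assigned to the same k-means cluster, rather than to all n positions. It is a simplified, single-head illustration, not the actual Routing Transformer implementation; the window size, cluster count, and clustering details are assumptions chosen for clarity.

```python
# Toy sketch of routing-style sparse attention: local window + same-cluster tokens.
import numpy as np

def routing_attention(q, k, v, window=4, n_clusters=4, n_iters=5):
    n, d = q.shape
    # Simple k-means over the keys to group semantically similar tokens.
    centroids = k[np.random.choice(n, n_clusters, replace=False)]
    for _ in range(n_iters):
        assign = np.argmax(k @ centroids.T, axis=-1)        # nearest centroid per key
        for c in range(n_clusters):
            members = k[assign == c]
            if len(members):
                centroids[c] = members.mean(axis=0)
    q_assign = np.argmax(q @ centroids.T, axis=-1)           # cluster of each query

    out = np.zeros_like(v)
    for i in range(n):
        local = np.arange(max(0, i - window), min(n, i + window + 1))
        routed = np.where(assign == q_assign[i])[0]          # tokens in the same cluster
        idx = np.union1d(local, routed)                      # sparse attention set
        scores = q[i] @ k[idx].T / np.sqrt(d)
        weights = np.exp(scores - scores.max())
        weights /= weights.sum()
        out[i] = weights @ v[idx]
    return out

# Toy usage: 16 tokens with 8-dimensional representations.
rng = np.random.default_rng(0)
x = rng.normal(size=(16, 8))
print(routing_attention(x, x, x).shape)   # (16, 8)
```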

We pretrained an RT model on the Project Gutenberg (PG-19) dataset with a language-modeling objective, i.e., the model learns to predict the next word given all preceding words, so that it can generate fluent, paragraph-length text.

Project Gutenberg (PG-19): https://deepmind.com/blog/article/A_new_model_and_dataset_for_long-range_memory
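To make the pretraining objective concrete, the toy snippet below computes a standard next-token cross-entropy loss, i.e., the negative log-probability of each actual next word under the model's predicted distribution. The logits and vocabulary size are dummy values for illustration only.

```python
# Minimal next-token language-modeling loss over dummy logits.
import numpy as np

def next_token_loss(logits, targets):
    """logits: (seq_len, vocab); targets: (seq_len,) ids of the true next tokens."""
    logits = logits - logits.max(axis=1, keepdims=True)              # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(log_probs[np.arange(len(targets)), targets])

rng = np.random.default_rng(0)
print(next_token_loss(rng.normal(size=(10, 100)), rng.integers(0, 100, size=10)))
```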

Information Retrieval

To demonstrate the effectiveness of the RT model on the LFQA task, we combine it with retrievals from REALM. The REALM model (Guu et al., 2020) is a retrieval-based model that uses maximum inner product search to retrieve Wikipedia articles relevant to a particular query or question; it was fine-tuned for factoid question answering on the Natural Questions dataset. REALM uses a BERT model to learn good representations of a question and uses ScaNN to retrieve Wikipedia articles with high topical similarity to the question representation. The model is then trained end-to-end to maximize the log-likelihood on the QA task.
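The retrieval step can be pictured as a maximum inner product search over precomputed passage embeddings. The sketch below uses a brute-force NumPy dot-product search as a stand-in for the BERT encoder and the ScaNN index; the shapes and data are placeholders, not the actual REALM setup.

```python
# Brute-force maximum inner product search as a stand-in for ScaNN.
import numpy as np

def retrieve(question_vec, passage_matrix, top_k=5):
    """Return indices of the top_k passages by inner product with the question."""
    scores = passage_matrix @ question_vec          # inner products, shape (num_passages,)
    return np.argsort(-scores)[:top_k]

rng = np.random.default_rng(0)
passages = rng.normal(size=(10_000, 128))           # stand-in for encoded Wikipedia passages
question = rng.normal(size=(128,))                  # stand-in for the encoded question
print(retrieve(question, passages))                 # indices of the 5 highest-scoring passages
```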

We further improved the quality of REALM retrievals by using a contrastive loss. The idea is to push the representation of a question closer to that of its ground-truth answer and away from the other answers in the mini-batch. This ensures that when the system retrieves relevant items using the question representation, it returns articles "similar" to the ground-truth answer. We call this retriever contrastive-REALM, or c-REALM.

Contrastive loss: https://towardsdatascience.com/contrastive-loss-explaned-159f2d4a87ec
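A minimal sketch of the in-batch contrastive objective described above: each question representation should score its paired ground-truth answer higher than the other answers in the mini-batch. The encoders and batch contents here are hypothetical; only the loss computation is illustrated.

```python
# In-batch contrastive loss: softmax over question-answer inner products,
# maximizing the probability of the true (diagonal) pairs.
import numpy as np

def contrastive_loss(question_reps, answer_reps):
    """question_reps, answer_reps: (batch, dim); row i of each is a matched pair."""
    logits = question_reps @ answer_reps.T                   # (batch, batch) similarity matrix
    logits = logits - logits.max(axis=1, keepdims=True)      # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))                      # NLL of the true pairs

rng = np.random.default_rng(0)
q = rng.normal(size=(8, 64))
a = q + 0.1 * rng.normal(size=(8, 64))                       # answers close to their questions
print(contrastive_loss(q, a))                                # small loss when pairs align
```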

Evaluation

We tested the model on long-form question answering using the ELI5 dataset, which is part of the KILT benchmark and is the only publicly available large-scale LFQA dataset. The KILT benchmark measures text retrieval quality with R-precision (R-Prec) and text generation quality with ROUGE-L. The two scores are combined into a KILT R-L score, which determines a model's ranking on the leaderboard. We fine-tuned the pretrained RT model together with c-REALM retrievals on the ELI5 portion of KILT.
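ROUGE-L compares a generated answer to a reference answer via their longest common subsequence (LCS). The simplified sketch below (plain F1 over the LCS, with no stemming or multi-reference handling) is included only to make the metric concrete.

```python
# Simplified ROUGE-L: F1 over the longest common subsequence of token lists.
def lcs_length(a, b):
    """Dynamic-programming longest common subsequence length."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
    return dp[-1][-1]

def rouge_l(candidate, reference):
    c, r = candidate.split(), reference.split()
    lcs = lcs_length(c, r)
    if lcs == 0:
        return 0.0
    precision, recall = lcs / len(c), lcs / len(r)
    return 2 * precision * recall / (precision + recall)

print(rouge_l("contrails form when jet exhaust condenses",
              "contrails form when water vapour in jet exhaust condenses"))  # 0.8
```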

Our submission tops the KILT leaderboard for long-form question answering on ELI5, with a combined KILT R-L score of 2.36. This improves on the previous leaderboard entry of BART + DPR (KILT R-L score of 1.9), while using a similar number of parameters to the other models on the leaderboard. In terms of text generation quality, we see ROUGE-L improvements of 4.11, 5.78, and 9.14 over T5, BART + DPR, and RAG, respectively.

Leaderboard: https://eval.ai/web/challenges/challenge-page/689/leaderboard/1908

Examples generated by the RT model

Q1

Why are almost all boats white?

Boats are generally painted white for obvious reasons, as they are inexpensive and easy to clean. Ships are also often painted to provide a visible visual contrast to the sea. Boats also often have high visibility ports to allow for easy international trade.

In addition, many boats have an air-breathing function that makes them easier to spot and stay afloat, which is a good thing if you're a pirate or trying to get into some country's waters. Lastly, many ships are built in the last few decades, so they are not as exotic looking as they were thirty years ago.

And lastly, many ships are made out of materials that are easy to work with, and so many of them are painted white as a practical matter. There are many, many more reasons for why ships are white, but those are the ones I know of. I'm sure there are many other reasons, but those are the ones I know of.

Edit: I forgot to mention that the other answers about visibility and easy to see ports are also true. Ships are painted to appear to be larger, and more expensive. That makes them appear more impressive, and so they are more likely to be bought by people who want to show off their wealth and status.

Q2

What causes the trail behind jets at high altitude?

The same thing that causes the clouds you see at the start of a storm to form in the first place. The air pressure changes as the jet moves through the air, causing it to cool down and condense into water droplets. These droplets are in fact a lot bigger than the water droplets in the clouds, so when the plane passes through them, they can cause the air pressure to change, causing the water droplets to condense into clouds.

This is what you see when you look at a cloud formation from the ground. In addition to this, when the plane passes through the clouds, it also moves through the air, causing it to change its temperature, causing the water droplets to condense and form clouds.

These clouds are often quite large, and can block out the sun. This is what you see when you look at a contrail. Finally, the water vapor that forms at high altitudes is cooled by the air above, and as it cools, it condenses into clouds. This is what you see when you look at a cumulonimbus cloud. These clouds are even larger than the water droplets in the clouds, and they block out the sun.

Q3

How do shampoo and conditioner work?

The key ingredients in shampoo and conditioner are usually surfactants, they tend to attract oil from your skin. Shampoo will usually contain an anti-dandruff agent (dandruff is caused by fungi in your skin which cause dandruff). They also contain some moisturizing agents (moisturizers are things that stay in your skin longer and protect your scalp from cleaning),

And a lot of anti-dandruff (dandruff is a side effect of high doses of both surfactants and moisturizers) Conditioner will contain moisturizing agents, like a skin moisturizer. They tend to cause oils to be separated from your hair (e.g. wash your hair with conditioner instead of shampoo). They both have a foaming agent that makes them foam up (usually Hydrogen Peroxide). They also have a foaming agent that stops dirt from attaching to your hair. Some conditioner will also suppress the action of the shampoo (to help prevent dirt from attaching to your hair).

Q4

When glancing at a clock, why does the first second after glancing at it sometimes feel longer than the rest?

This is known as chronostasis. Your brain gets used to seeing the second hand move after the first, so it ignores the first one. If you were to look at the clock a lot, you'd see the second hand move a lot more than the first, and you'd be able to tell the difference between the second and the first.

The second hand will appear to move much slower than the first hand because your brain is used to seeing the second hand move a lot more than the first. If you were to go on a long trip, you'd get used to the second hand moving a lot more than the first, and you'd have no problem telling the difference between the second and the first hand.

It's kind of like how you don't notice your nose when you're driving, but if you look at it for a while, you start to notice it. Your brain is just getting used to seeing the nose move a lot more than the other, and it can't tell the difference between the two. It's kind of like that with your eyes.

Q5

Why do Britain and other English empire countries still bow to monarchs? What real purpose does the queen serve?

Depends what you define as "purpose". There is no specific purpose for the British monarch - rather, they are figureheads that serve a role. The role is to have a high degree of influence over a country, though the rules of the monarchy can somewhat vary from country to country, and the ceremony of "kissing the royal corpse" does not always involve kissing the monarch. Whether that's the only reason for the custom, or if it was the only reason,

I can't say, but that's the reasoning. When the U.K. was in full power, the monarch was a puppet of Parliament, and the powers of the monarch were transferred to the Prime Minister and thus to the Prime Minister's deputy, who then became the Prime Minister. Since then, Parliament has been able to vote on legislation that goes through the monarch,

although they may still act as the monarch's representative in negotiating treaties, which can have very very deep consequences. The Queen's role, as a representative of Britain, doesn't necessarily involve her formal approval of any of the laws or legislation that goes through Parliament, though.

Q6

What exactly is fire, in detail? How can light and heat come from something we can't really touch?

Fire is the chemical reaction of fuel (oxygen) with (what we call) impurities (ash, soot, oil etc). These impurities are created due to a number of things. If you were to study the chemical composition of a log (or a candle) and analyse it you would discover the chemical formula (A, C, and G) which is a simplified version of what actually happens. There is always fuel, and there is always the ash/soot. Wood is made up of mostly carbon and hydrogen and these are the fuels that create the fire. Hydrogen and oxygen are reactive, and they form water (H2O) and CO2 (CO2). If we split a log into smaller pieces, we start to see the process of combustion as it occurs. The carbon is burned, and with the explosion of CO, H2O and H2CO2, we get water and carbon dioxide (mostly). If we smash a log together, we start to see what happens when we also hit another log with another log...imagine smashing two sand castles together. We see how the sand castles and brick castles form and collapse at different rates...then it starts to burn. The smoke, ash and flames are the actual combustion of fuel.

Q7

What goes on in those tall tower buildings owned by major banks?

The actual buildings are not there for profit but for show. There are many real world historical buildings that have little to no effect other than being a symbol of an economic entity or symbol of an era or location, and nothing more. For example look at Sears, Sears didn't care what went on inside, it was all about the _appearance_ of its location, the prestige of the location, the facilities and so on. It didn't care about how long it took it to operate, it was about how much people would pay to go see it. Sears was a landmark as a cultural movement and other big companies followed suit, so if you want to see a building you've never seen before, you have to go see Sears, just like you have to see a Toyota Camry for Toyota Camry. They used to be all about building new factories, some of them if I recall, but now that they're bigger, that means that more factory jobs are coming to them. You've probably seen them in stores as stores where people buy and sell stuff, so there aren't that many places for them to come from. Instead, it's just for show, a symbol of rich people.

Hurdles to progress in LFQA

However, while the RT system described here tops the public leaderboard, a detailed analysis of the model and the ELI5 dataset reveals some concerning trends.

Train/Valid Overlap: Many held-out questions are paraphrased in the training set. The best answer to similar training questions gets 27.4 ROUGE-L.

Lack of Grounding: Conditioning answer generation on random documents instead of relevant ones does not measurably impact its factual correctness. Longer outputs get higher ROUGE-L.

We find little to no evidence that the model actually grounds its text generation in the retrieved documents: a fine-tuned RT model paired with random retrievals from Wikipedia (i.e., random retrieval + RT) performs almost as well as the c-REALM + RT model (24.2 vs. 24.4 ROUGE-L). We also find significant overlap between the ELI5 training, validation, and test sets (many questions are paraphrases of one another), which may make retrieval unnecessary. The KILT benchmark measures retrieval and generation quality separately, but does not check whether the text generation actually uses the retrievals.

Trivial baselines obtain higher ROUGE-L scores than RAG and BART + DPR

Moreover, we find issues with using the ROUGE-L metric to evaluate text generation quality: trivial, nonsensical baselines such as a random training-set answer or copying the input achieve relatively high ROUGE-L scores, even exceeding BART + DPR and RAG.
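To see why such baselines can look competitive, one can reuse the rouge_l helper sketched in the Evaluation section above (assumed here to be in scope): simply echoing the question already shares many tokens with a long reference answer. The strings and the resulting number below are illustrative, not figures from the paper.

```python
# Input-copying "baseline": score the question itself against a reference answer.
question = "why are almost all boats painted white"
reference_answer = ("boats are often painted white because white paint is cheap, "
                    "reflects sunlight to keep the hull cool, and makes damage easy to spot")
print(rouge_l(question, reference_answer))   # nonzero ROUGE-L despite answering nothing
```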

Conclusion

我們?yōu)榛?Routing Transformers 和 REALM 的長(zhǎng)格式問(wèn)答推出了一個(gè)系統(tǒng),該系統(tǒng)在關(guān)于 ELI5 的 KILT 排行榜中名列前茅。但是,詳細(xì)的分析揭示了存在的一些問(wèn)題,即無(wú)法使用基準(zhǔn)來(lái)顯示有意義的建模進(jìn)展。我們希望社區(qū)共同合作,一起解決這些問(wèn)題,以便研究人員向正確的高峰攀登,在這個(gè)充滿挑戰(zhàn)但十分重要的任務(wù)中取得有意義的進(jìn)展。

Acknowledgements

The Routing Transformer is the result of a team effort by Aurko Roy, Mohammad Saffar, Ashish Vaswani, and David Grangier. The follow-up work on open-domain long-form question answering was a collaboration between Kalpesh Krishna, Aurko Roy, and Mohit Iyyer. We would like to thank Vidhisha Balachandran, Niki Parmar, and Ashish Vaswani for several useful discussions, and the REALM team (Kenton Lee, Kelvin Guu, Ming-Wei Chang, and Zora Tung) for help with their codebase and several useful discussions that helped us improve our experiments.

We are grateful to Tu Vu for help with the QQP classifier used to detect paraphrases in the ELI5 train and test sets. We thank Jules Gagnon-Marchand and Sewon Min for suggesting useful experiments on checking ROUGE-L bounds. Finally, we thank Shufan Wang, Andrew Drozdov, Nader Akoury, and the rest of the UMass NLP group for helpful comments and suggestions at various stages of the project.

Editor: jq


Original title: Progress and Challenges in Open-Domain Long-Form Question Answering Systems

Article source: WeChat official account Tensorflowers (WeChat ID: tensorflowers). Please credit the source when reposting.
