
Sentences with "like wine, like sugar"?

Time: 2024-08-21 08:49 · Views: 0 · Editor: 招聘街

1. Sentences with "like wine, like sugar"?

From Qiting: Five Poems (《岐亭五首》) by the Song-dynasty poet Su Shi (蘇軾); the opening couplet reads "sour wine is like pickle broth, sweet wine is like honeyed syrup":

  酸酒如虀湯,甜酒如蜜汁。

  三年黃州城,飲酒但飲濕。

  我如更揀擇,一醉豈易得。

  幾思壓茅柴,禁網日夜急。

  西鄰椎甕盎,醉倒豬與鴨。

  君家大如掌,破屋無遮冪。

  何從得此酒,冷面妒君赤。

  定應(yīng)好事人,千石供李白。

  為君三日醉,蓬發(fā)不暇幘。

  夜深欲踰垣,臥想春甕泣。

  君奴亦笑我,鬢齒行禿缺。

  三年已四至,歲歲遭惡客。

  人生幾兩屐,莫厭頻來集。

2. Is there really such a thing as "craving sugar like life itself" (嗜糖如命)?

Yes. A coworker of mine eats candy all day long; when I urge him to cut back, he just laughs it off. Too much sugar is bad for your teeth and contributes to weight gain, so I would advise everyone to eat less of it for the sake of their health.

3. What is "like sugar, like honey" (似糖如蜜) used to describe?

似糖如蜜 describes a married couple or sweethearts whose relationship is loving and very happy. It is close in meaning to 如膠似漆 ("stuck together like glue and lacquer") and mainly depicts deep, mutual affection.

4. What is 潮如糖鋪 like?

潮如糖鋪 is a dessert-focused eatery with a stylish, modern interior and a comfortable atmosphere. It offers a wide variety of sweets, from classic milk tea and ice cream to creative dessert platters, all finely made and rich in flavor, the kind you keep thinking about afterward. Service is warm and attentive and prices are reasonable. All in all, it is a dessert shop worth recommending, a good choice whether you are meeting friends or just relaxing.

5. In which episode of The Flame's Daughter (烈火如歌) does the Yin Xue and Ru Ge pairing get its sweet scene?

With senior brother Yu out of the picture at this point, Yin Xue, the nominal male lead, finally takes pity on us viewers and delivers a heavy dose of sweetness in episode 40.

6. What does 嗜糖如命 mean?

嗜糖如命 describes someone who loves eating sugar so much that they cherish it the way they cherish their own life.

7. What do different candies symbolize, e.g. milk candy or cotton candy?

Rock sugar, cotton candy, crystal candy, maltose, chewing gum, bubble gum, popping candy, glucose candy, strawberry candy, milk candy, wedding candy, lollipops, fruit drops, assorted candy, exploding candy, White Rabbit milk candy, Wowo milk candy, Golden Monkey milk candy, QQ gummies, orange hard candy, lollipops, cool mint candy, fruit drops, cotton candy, coffee candy, corn soft candy, preserved-plum candy, coconut candy, toffee, crunch candy, corn candy, pomegranate candy, coffee candy, whistle candy, ring pops, lollipops (波板糖), breath candy, mints, cooling candy, lemon-milk lollipops, strawberry-yogurt lollipops, cola candy, Sprite candy, yogurt candy, oolong-tea candy, green-tea candy, black-tea candy, gummy candy, Doublemint gum, Juicy Fruit gum, Wrigley's Spearmint gum, Extra gum, chocolate milk candy, Want Want milk candy, plain coconut candy, fresh-milk coconut candy, cream coconut candy, coffee coconut candy, popping chocolate candy, liqueur-filled chocolates, popping candy, white sugar, brown sugar, white granulated sugar, monocrystal rock sugar.

8. What do different candies symbolize, e.g. milk candy or cotton candy?

Candy stands for sweetness: sweet in the mouth, sweet in the heart, a feeling only those who have tasted it understand. It is like falling in love: each stage of a relationship is a candy with a different flavor, and the deeper and truer the love, the richer the candy tastes. First love is a sour-centered candy, a little tart on the outside but sweeter and sweeter toward the middle, just like the shyness of a first romance. Passionate love is fruit candy, a mouthful of color and sweetness that makes you close your eyes and savor every last thread of it. Marriage is coffee candy, growing mellower with time, though the harshness of real life adds some bitterness to the love, which makes it all the more worth savoring slowly. Candy can be honey-sweet, but don't lose yourself in it: a level head is the best protection against cavities.

1. Gummy candy: someone you look down on

2. Fruit candy: a good friend

3. Chocolate: someone you love deeply

4. Cotton candy: someone you value

5. Mints: a stranger

6. Lollipop: someone you have a crush on

7. Chewing gum: someone you dislike

8. Bubble gum: someone you don't love

9. Milk candy: a lover

9. In which episode of Immortal Samsara (沉香如屑) does Tang Zhou change back into the Sovereign Lord?

The male lead's transformation comes in next Monday's update after 6 p.m. At the start of episode 47 he reverts to Sovereign Ying Yuan (應淵): Yan Dan (顏淡) first stabs Tang Zhou (唐周) with her sword, because he must die and be reborn to complete his trial and return to his divine post. In the current trailer, Yan Dan is ready to sacrifice her own life to restore Ying Yuan to his position and his immortal powers; in the end she gives him the other half of her heart as well, and he is restored. For what happens after that, we will have to wait for Monday's episode.

10. A Mahout interview question

I had previously gone through the Mahout official 20news example and how it is invoked, and wanted to follow the same flow on another example. Online I found one about whether the weather is suitable for playing badminton (the data is the classic PlayTennis set).

Training data:

Day Outlook Temperature Humidity Wind PlayTennis

D1 Sunny Hot High Weak No

D2 Sunny Hot High Strong No

D3 Overcast Hot High Weak Yes

D4 Rain Mild High Weak Yes

D5 Rain Cool Normal Weak Yes

D6 Rain Cool Normal Strong No

D7 Overcast Cool Normal Strong Yes

D8 Sunny Mild High Weak No

D9 Sunny Cool Normal Weak Yes

D10 Rain Mild Normal Weak Yes

D11 Sunny Mild Normal Strong Yes

D12 Overcast Mild High Strong Yes

D13 Overcast Hot Normal Weak Yes

D14 Rain Mild High Strong No

Test data:

sunny,hot,high,weak

結(jié)果:

Yes => 0.007039

No => 0.027418
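As a sanity check, these two scores match the textbook Naive Bayes computation over the 14 training rows; the small differences from 0.007039 and 0.027418 presumably come from smoothing and TF-IDF weighting in Mahout's pipeline. Below is a minimal standalone sketch, with the class priors and conditional counts hand-derived from the table above (the class name PlayTennisByHand is mine, not part of the original example):

public class PlayTennisByHand {

    public static void main(String[] args) {
        // Class priors: 9 of the 14 days are Yes, 5 are No.
        double pYes = 9.0 / 14, pNo = 5.0 / 14;
        // Score = P(class) * P(sunny|class) * P(hot|class) * P(high|class) * P(weak|class),
        // with each conditional probability counted directly from the table.
        double yes = pYes * (2.0 / 9) * (2.0 / 9) * (3.0 / 9) * (6.0 / 9);
        double no  = pNo  * (3.0 / 5) * (2.0 / 5) * (4.0 / 5) * (2.0 / 5);
        System.out.printf("Yes => %.6f%n", yes); // ~0.007055
        System.out.printf("No  => %.6f%n", no);  // ~0.027429, so "No" wins
    }
}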

So I used Java code to call Mahout's utility classes and implement the classification.

Basic idea:

1. Construct the classification data.

2. Train with Mahout's tool classes to obtain a training model.

3. Convert the data to be classified into vector form.

4. Run the classifier on the vector data.

Next, my code implementation:

1. Constructing the classification data:

On HDFS, create the directory /zhoujianfeng/playtennis/input and upload the data under the two class folders, no and yes.

Data file format: for example, the file D1 contains: Sunny Hot High Weak. (A sketch of scripting the upload follows below.)
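For completeness, here is one way the upload step could be scripted instead of done by hand with the hadoop CLI. This UploadTrainingData class is not from the original example; it is a hypothetical sketch assuming the labeled files sit in local directories playtennis/no and playtennis/yes, one small text file per training day:

package myTesting.bayes;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Hypothetical helper: copies the local class folders no/ and yes/
// into the HDFS input directory that PlayTennis1 expects.
public class UploadTrainingData {

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.addResource(new Path("/usr/local/hadoop/conf/core-site.xml"));
        FileSystem fs = FileSystem.get(conf);
        Path dst = new Path("hdfs://192.168.9.72:9000/zhoujianfeng/playtennis/input");
        fs.mkdirs(dst);
        // Each local folder holds one file per day, e.g. D1 containing "Sunny Hot High Weak"
        fs.copyFromLocalFile(new Path("playtennis/no"), dst);
        fs.copyFromLocalFile(new Path("playtennis/yes"), dst);
        fs.close();
    }
}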

2. Train with Mahout's tool classes to obtain a training model.

3. Convert the data to be classified into vector form.

4. Run the classifier on the vector data.

For these three steps I will paste the code all at once; it consists mainly of two classes, PlayTennis1 and BayesCheckData:

package myTesting.bayes;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.util.ToolRunner;
import org.apache.mahout.classifier.naivebayes.training.TrainNaiveBayesJob;
import org.apache.mahout.text.SequenceFilesFromDirectory;
import org.apache.mahout.vectorizer.SparseVectorsFromSequenceFiles;

public class PlayTennis1 {

    private static final String WORK_DIR = "hdfs://192.168.9.72:9000/zhoujianfeng/playtennis";

    /* Driver code */
    public static void main(String[] args) {
        // Convert the training data into vectors
        makeTrainVector();
        // Build the training model
        makeModel(false);
        // Classify the test data; note that makeCheckVector() below is never
        // invoked here, since BayesCheckData builds the test vector in memory instead
        BayesCheckData.printResult();
    }

    public static void makeCheckVector() {
        // Convert the test data into sequence files
        try {
            Configuration conf = new Configuration();
            conf.addResource(new Path("/usr/local/hadoop/conf/core-site.xml"));
            String input = WORK_DIR + Path.SEPARATOR + "testinput";
            String output = WORK_DIR + Path.SEPARATOR + "tennis-test-seq";
            Path in = new Path(input);
            Path out = new Path(output);
            FileSystem fs = FileSystem.get(conf);
            if (fs.exists(in)) {
                if (fs.exists(out)) {
                    // the boolean argument means "delete recursively"
                    fs.delete(out, true);
                }
                SequenceFilesFromDirectory sffd = new SequenceFilesFromDirectory();
                String[] params = new String[]{"-i", input, "-o", output, "-ow"};
                ToolRunner.run(sffd, params);
            }
        } catch (Exception e) {
            e.printStackTrace();
            System.out.println("Serializing the files failed!");
            System.exit(1);
        }
        // Convert the sequence files into TF-IDF vector files
        try {
            Configuration conf = new Configuration();
            conf.addResource(new Path("/usr/local/hadoop/conf/core-site.xml"));
            String input = WORK_DIR + Path.SEPARATOR + "tennis-test-seq";
            String output = WORK_DIR + Path.SEPARATOR + "tennis-test-vectors";
            Path in = new Path(input);
            Path out = new Path(output);
            FileSystem fs = FileSystem.get(conf);
            if (fs.exists(in)) {
                if (fs.exists(out)) {
                    fs.delete(out, true); // recursive delete
                }
                SparseVectorsFromSequenceFiles svfsf = new SparseVectorsFromSequenceFiles();
                String[] params = new String[]{"-i", input, "-o", output, "-lnorm", "-nv", "-wt", "tfidf"};
                ToolRunner.run(svfsf, params);
            }
        } catch (Exception e) {
            e.printStackTrace();
            System.out.println("Converting sequence files to vectors failed!");
            System.exit(2); // the original printed 2 instead of exiting, which was a bug
        }
    }

    public static void makeTrainVector() {
        // Convert the training data into sequence files
        try {
            Configuration conf = new Configuration();
            conf.addResource(new Path("/usr/local/hadoop/conf/core-site.xml"));
            String input = WORK_DIR + Path.SEPARATOR + "input";
            String output = WORK_DIR + Path.SEPARATOR + "tennis-seq";
            Path in = new Path(input);
            Path out = new Path(output);
            FileSystem fs = FileSystem.get(conf);
            if (fs.exists(in)) {
                if (fs.exists(out)) {
                    // the boolean argument means "delete recursively"
                    fs.delete(out, true);
                }
                SequenceFilesFromDirectory sffd = new SequenceFilesFromDirectory();
                String[] params = new String[]{"-i", input, "-o", output, "-ow"};
                ToolRunner.run(sffd, params);
            }
        } catch (Exception e) {
            e.printStackTrace();
            System.out.println("Serializing the files failed!");
            System.exit(1);
        }
        // Convert the sequence files into TF-IDF vector files
        try {
            Configuration conf = new Configuration();
            conf.addResource(new Path("/usr/local/hadoop/conf/core-site.xml"));
            String input = WORK_DIR + Path.SEPARATOR + "tennis-seq";
            String output = WORK_DIR + Path.SEPARATOR + "tennis-vectors";
            Path in = new Path(input);
            Path out = new Path(output);
            FileSystem fs = FileSystem.get(conf);
            if (fs.exists(in)) {
                if (fs.exists(out)) {
                    fs.delete(out, true); // recursive delete
                }
                SparseVectorsFromSequenceFiles svfsf = new SparseVectorsFromSequenceFiles();
                String[] params = new String[]{"-i", input, "-o", output, "-lnorm", "-nv", "-wt", "tfidf"};
                ToolRunner.run(svfsf, params);
            }
        } catch (Exception e) {
            e.printStackTrace();
            System.out.println("Converting sequence files to vectors failed!");
            System.exit(2); // the original printed 2 instead of exiting, which was a bug
        }
    }

    public static void makeModel(boolean completelyNB) {
        try {
            Configuration conf = new Configuration();
            conf.addResource(new Path("/usr/local/hadoop/conf/core-site.xml"));
            String input = WORK_DIR + Path.SEPARATOR + "tennis-vectors" + Path.SEPARATOR + "tfidf-vectors";
            String model = WORK_DIR + Path.SEPARATOR + "model";
            String labelindex = WORK_DIR + Path.SEPARATOR + "labelindex";
            Path in = new Path(input);
            Path out = new Path(model);
            Path label = new Path(labelindex);
            FileSystem fs = FileSystem.get(conf);
            if (fs.exists(in)) {
                if (fs.exists(out)) {
                    fs.delete(out, true); // recursive delete
                }
                if (fs.exists(label)) {
                    fs.delete(label, true); // recursive delete
                }
                TrainNaiveBayesJob tnbj = new TrainNaiveBayesJob();
                String[] params = null;
                if (completelyNB) {
                    // "-c" trains a complementary Naive Bayes model
                    params = new String[]{"-i", input, "-el", "-o", model, "-li", labelindex, "-ow", "-c"};
                } else {
                    params = new String[]{"-i", input, "-el", "-o", model, "-li", labelindex, "-ow"};
                }
                ToolRunner.run(tnbj, params);
            }
        } catch (Exception e) {
            e.printStackTrace();
            System.out.println("Building the training model failed!");
            System.exit(3);
        }
    }
}

package myTesting.bayes;

import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.PathFilter;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.mahout.classifier.naivebayes.BayesUtils;
import org.apache.mahout.classifier.naivebayes.NaiveBayesModel;
import org.apache.mahout.classifier.naivebayes.StandardNaiveBayesClassifier;
import org.apache.mahout.common.Pair;
import org.apache.mahout.common.iterator.sequencefile.PathType;
import org.apache.mahout.common.iterator.sequencefile.SequenceFileDirIterable;
import org.apache.mahout.math.RandomAccessSparseVector;
import org.apache.mahout.math.Vector;
import org.apache.mahout.math.Vector.Element;
import org.apache.mahout.vectorizer.TFIDF;

import com.google.common.collect.ConcurrentHashMultiset;
import com.google.common.collect.Multiset;

public class BayesCheckData {

    private static StandardNaiveBayesClassifier classifier;
    private static Map<String, Integer> dictionary;
    private static Map<Integer, Long> documentFrequency;
    private static Map<Integer, String> labelIndex;

    public void init(Configuration conf) {
        try {
            String modelPath = "/zhoujianfeng/playtennis/model";
            String dictionaryPath = "/zhoujianfeng/playtennis/tennis-vectors/dictionary.file-0";
            String documentFrequencyPath = "/zhoujianfeng/playtennis/tennis-vectors/df-count";
            String labelIndexPath = "/zhoujianfeng/playtennis/labelindex";
            dictionary = readDictionnary(conf, new Path(dictionaryPath));
            documentFrequency = readDocumentFrequency(conf, new Path(documentFrequencyPath));
            labelIndex = BayesUtils.readLabelIndex(conf, new Path(labelIndexPath));
            NaiveBayesModel model = NaiveBayesModel.materialize(new Path(modelPath), conf);
            classifier = new StandardNaiveBayesClassifier(model);
        } catch (IOException e) {
            e.printStackTrace();
            System.out.println("Initialization failed while loading the model and vectorization metadata...");
            System.exit(4);
        }
    }

    /**
     * Load the dictionary files. Key: term value; Value: term id.
     * @param conf
     * @param dictionnaryDir
     * @return
     */
    private static Map<String, Integer> readDictionnary(Configuration conf, Path dictionnaryDir) {
        Map<String, Integer> dictionnary = new HashMap<String, Integer>();
        PathFilter filter = new PathFilter() {
            @Override
            public boolean accept(Path path) {
                String name = path.getName();
                return name.startsWith("dictionary.file");
            }
        };
        for (Pair<Text, IntWritable> pair : new SequenceFileDirIterable<Text, IntWritable>(dictionnaryDir, PathType.LIST, filter, conf)) {
            dictionnary.put(pair.getFirst().toString(), pair.getSecond().get());
        }
        return dictionnary;
    }

    /**
     * Load the per-term document-frequency files under df-count. Key: term id; Value: document frequency.
     * @param conf
     * @param documentFrequencyDir
     * @return
     */
    private static Map<Integer, Long> readDocumentFrequency(Configuration conf, Path documentFrequencyDir) {
        Map<Integer, Long> documentFrequency = new HashMap<Integer, Long>();
        PathFilter filter = new PathFilter() {
            @Override
            public boolean accept(Path path) {
                return path.getName().startsWith("part-r");
            }
        };
        for (Pair<IntWritable, LongWritable> pair : new SequenceFileDirIterable<IntWritable, LongWritable>(documentFrequencyDir, PathType.LIST, filter, conf)) {
            documentFrequency.put(pair.getFirst().get(), pair.getSecond().get());
        }
        return documentFrequency;
    }

    public static String getCheckResult() {
        Configuration conf = new Configuration();
        conf.addResource(new Path("/usr/local/hadoop/conf/core-site.xml"));
        String classify = "NaN";
        BayesCheckData cdv = new BayesCheckData();
        cdv.init(conf);
        System.out.println("init done...............");
        Vector vector = new RandomAccessSparseVector(10000);
        TFIDF tfidf = new TFIDF();
        // Test record: sunny,hot,high,weak
        Multiset<String> words = ConcurrentHashMultiset.create();
        words.add("sunny", 1);
        words.add("hot", 1);
        words.add("high", 1);
        words.add("weak", 1);
        int documentCount = documentFrequency.get(-1).intValue(); // the entry with key -1 holds the total document count
        for (Multiset.Entry<String> entry : words.entrySet()) {
            String word = entry.getElement();
            int count = entry.getCount();
            // look up the term id assigned by seq2sparse (stored in dictionary.file-0)
            Integer wordId = dictionary.get(word);
            if (wordId == null) { // replaces the original StringUtils check, which would NPE on unknown words
                continue;
            }
            if (documentFrequency.get(wordId) == null) {
                continue;
            }
            Long freq = documentFrequency.get(wordId);
            double tfIdfValue = tfidf.calculate(count, freq.intValue(), 1, documentCount);
            vector.setQuick(wordId, tfIdfValue);
        }
        // Run the Bayes classifier and pick the label with the best score
        Vector resultVector = classifier.classifyFull(vector);
        double bestScore = -Double.MAX_VALUE;
        int bestCategoryId = -1;
        for (Element element : resultVector.all()) {
            int categoryId = element.index();
            double score = element.get();
            System.out.println("categoryId:" + categoryId + " score:" + score);
            if (score > bestScore) {
                bestScore = score;
                bestCategoryId = categoryId;
            }
        }
        classify = labelIndex.get(bestCategoryId) + "(categoryId=" + bestCategoryId + ")";
        return classify;
    }

    public static void printResult() {
        System.out.println("Predicted category: " + getCheckResult());
    }
}
