1. Review documents and prepare accounting vouchers: examine all types of original documents and, once verified, prepare accounting vouchers based on them.
2. Close the books and prepare statements: close the accounts at month-end and prepare the various accounting statements on time, ensuring the figures are accurate, the calculations correct, the content complete, the explanations clear, and the submission punctual.
3. Check and supervise the use of inventory materials: check and verify inventory materials and supervise their purchase and requisition.
4. Supervise financial operations and reconcile accounts: monitor the company's financial operations and promptly reconcile cash, bank statements, and all vouchers and documents with the cashier, so that balances and documents are clearly accounted for.
5. Calculate and pay taxes: calculate the company's taxes, complete the tax returns, and handle payment.
6. Verify and manage invoices: handle invoice verification and management, keep invoices safe, and never lose them.
7. Organize vouchers and keep records: organize and bind the vouchers at month-end, and keep financial records and accounting archives safe and complete.
8. Other tasks: complete other related tasks assigned by superiors.
9. Assist the tax authorities: coordinate with the work of the tax authorities, and keep studying to stay on top of policy.
10. Reconcile accounts and spot-check the warehouse: reconcile with the warehouse keeper at month-end so that the books match the physical stock, and spot-check the warehouse at irregular intervals.
Financial accountants can obtain certificates by taking the relevant examinations, which demonstrate the corresponding knowledge and ability. The most common ones include the Certified Public Accountant (CPA) examination and the accounting qualification examination. The exams mainly cover financial accounting, management accounting, financial management, and tax law; they generally consist of multiple-choice and subjective questions and require solid knowledge as well as the ability to read and reason through problems. Passing these exams improves one's competitiveness in the job market and helps with career development and promotion.
So-called traditional accounting refers to accounting practice that uses historical cost as the basis for valuing assets; because it has long been in use in Western countries, it is called traditional accounting.
Concept
As early as 1940, the noted early American accounting scholars Paton and Littleton, in a monograph written for the American Accounting Association, An Introduction to Corporate Accounting Standards, gave a very clear account of the theory underlying traditional accounting practice in Western countries; it is still widely cited today.
Characteristics
1. It emphasizes the measurement of income.
2. It emphasizes the behavioral nature of income.
3. It emphasizes the cost-attachment concept.
4. It emphasizes the cost-flow concept.
In other words, the characteristics of traditional accounting are:
1. It primarily discharges accountability to investors, in particular stewardship over assets, i.e., reporting the results of their investment.
2. It emphasizes the measurement of income, which is measured according to the following principles: (1) the realization principle for revenue recognition, i.e., revenue is recognized upon sale; (2) expenses are recognized under the matching principle.
3. Assets are valued at historical cost.
4. It emphasizes the concepts of cost attachment and cost flow: the purchase cost of a fixed asset is allocated, on a judgmental basis, across accounting periods, and the original cost of consumed equipment and raw materials is attached to product cost. Traditional accounting holds that accounting is not a valuation process but a process of allocating or attaching historical cost; it does not consider the current cost of the assets consumed.
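A quick worked illustration of this allocation idea (my own example, not from the original text): a machine bought for 100,000 with an expected ten-year life and no salvage value is expensed at 100,000 / 10 = 10,000 per year under straight-line allocation, regardless of what the machine would cost to replace today.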
Definition of junior financial accounting: organizing invoices, contracts, and agreements for signing and approval, and recording the corresponding entry vouchers.
A senior financial accountant holds the senior accountant qualification certificate. Beyond the functions of ordinary accounting work, they have the competence for management accounting and auditing and can take on the role of company CFO. The problem they solve is the external reporting of economic events: providing outside investors, creditors, and other parties with a stake in the enterprise with information about its financial position, operations, and operating results, to meet the demand their decisions place on financial accounting information.
Senior financial accounting theory derives from general financial accounting theory; it is a development and extension of it…
Financial accounting
[dictionary] financial accounting;
[Example] Financial accounting and management accounting.
1. The Kaoshiyun question bank supports categorizing questions by knowledge point, with multi-level tree sub-categories, and supports batch editing, deletion, and export. Questions can be added visually or imported in bulk from Word, Excel, or TXT templates. There are six basic question types: single choice, multiple choice, indeterminate-choice, fill-in-the-blank, true/false, and short answer; composite types such as material-based questions, cloze, reading comprehension, listening, and video questions can also be configured.
When asked about stress tolerance in an interview, you can prepare answers around the following questions:
1. What is your view of pressure? Why do you think good stress management matters for work and life?
2. What is the greatest pressure you have ever faced? How did you handle it, and what was the outcome?
3. How do you prevent stress from piling up? What do you usually do to relieve it?
4. How do you handle emergencies at work? How do you stay calm when one occurs?
5. When you feel unable to cope, how do you manage your emotions? Have you ever sought help from colleagues or supervisors?
Answers to these questions should genuinely demonstrate the candidate's ability, attitude, and methods for coping with pressure. Note that pressure is a normal part of work and life; stress management is not about eliminating pressure but about learning to face and handle it sensibly, so as to work and live better.
That should fall within a school doctor's scope of work: first aid, knowledge of infectious diseases, and health education. Besides professional knowledge, some open-ended questions will also be asked, so prepare well. Good luck!
I had previously looked at how the official Mahout 20news example is invoked, and wanted to implement other examples following the same flow. Online I found an example that uses the weather to decide whether it is suitable for a game of badminton (essentially the classic PlayTennis data set).
Training data:
Day Outlook Temperature Humidity Wind PlayTennis
D1 Sunny Hot High Weak No
D2 Sunny Hot High Strong No
D3 Overcast Hot High Weak Yes
D4 Rain Mild High Weak Yes
D5 Rain Cool Normal Weak Yes
D6 Rain Cool Normal Strong No
D7 Overcast Cool Normal Strong Yes
D8 Sunny Mild High Weak No
D9 Sunny Cool Normal Weak Yes
D10 Rain Mild Normal Weak Yes
D11 Sunny Mild Normal Strong Yes
D12 Overcast Mild High Strong Yes
D13 Overcast Hot Normal Weak Yes
D14 Rain Mild High Strong No
Test data:
sunny,hot,high,weak
Result:
Yes => 0.007039
No => 0.027418
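These scores match the standard Naive Bayes computation on this data set (my own reconstruction, treating them as unnormalized posteriors): P(No) * P(sunny|No) * P(hot|No) * P(high|No) * P(weak|No) = 5/14 * 3/5 * 2/5 * 4/5 * 2/5 ≈ 0.0274, while P(Yes) * P(sunny|Yes) * P(hot|Yes) * P(high|Yes) * P(weak|Yes) = 9/14 * 2/9 * 2/9 * 3/9 * 6/9 ≈ 0.0071. Since 0.0274 > 0.0071, the record is classified as No.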
So I used Java code to call Mahout's utility classes and do the classification.
The basic idea:
1. Construct the classification data.
2. Train with Mahout's utility classes to obtain a model.
3. Convert the data to be tested into vector data.
4. Have the classifier classify the vector data.
Here is my implementation:
1. Constructing the classification data:
On HDFS, create the directory /zhoujianfeng/playtennis/input and upload the data from the category folders no and yes into it.
Data file format, e.g. the contents of file D1: Sunny Hot High Weak
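For reference, here is a minimal sketch of my own (not from the original post; the local paths are hypothetical) that creates the input directory and uploads the local yes/ and no/ category folders with the Hadoop FileSystem API; the hadoop fs shell would do the same job:
package myTesting.bayes;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
// Sketch (added for illustration): upload the training folders to HDFS.
// The local paths /home/user/playtennis/yes and /home/user/playtennis/no are hypothetical.
public class UploadTrainData {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.addResource(new Path("/usr/local/hadoop/conf/core-site.xml"));
        FileSystem fs = FileSystem.get(conf);
        Path input = new Path("hdfs://192.168.9.72:9000/zhoujianfeng/playtennis/input");
        fs.mkdirs(input);
        // copyFromLocalFile copies a local directory (and its files) into HDFS
        fs.copyFromLocalFile(new Path("/home/user/playtennis/yes"), input);
        fs.copyFromLocalFile(new Path("/home/user/playtennis/no"), input);
        fs.close();
    }
}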
2. Train with Mahout's utility classes to obtain a model.
3. Convert the data to be tested into vector data.
4. Have the classifier classify the vector data.
For these three steps I will post the code all at once; it is mainly two classes, PlayTennis1 and BayesCheckData:
package myTesting.bayes;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.util.ToolRunner;
import org.apache.mahout.classifier.naivebayes.training.TrainNaiveBayesJob;
import org.apache.mahout.text.SequenceFilesFromDirectory;
import org.apache.mahout.vectorizer.SparseVectorsFromSequenceFiles;
public class PlayTennis1 {
    private static final String WORK_DIR = "hdfs://192.168.9.72:9000/zhoujianfeng/playtennis";
    /*
     * Test driver
     */
    public static void main(String[] args) {
        // Convert the training data into vector data
        makeTrainVector();
        // Train the model
        makeModel(false);
        // Classify the test record
        BayesCheckData.printResult();
    }
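    // Note (added): main() never calls makeCheckVector(), because BayesCheckData
    // below builds the test vector in memory from the four attribute words.
    // Call makeCheckVector() first if you also want the test set vectorized on HDFS.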
    public static void makeCheckVector() {
        // Convert the test data into sequence files
        try {
            Configuration conf = new Configuration();
            conf.addResource(new Path("/usr/local/hadoop/conf/core-site.xml"));
            String input = WORK_DIR + Path.SEPARATOR + "testinput";
            String output = WORK_DIR + Path.SEPARATOR + "tennis-test-seq";
            Path in = new Path(input);
            Path out = new Path(output);
            FileSystem fs = FileSystem.get(conf);
            if (fs.exists(in)) {
                if (fs.exists(out)) {
                    // the boolean argument means "delete recursively"
                    fs.delete(out, true);
                }
                SequenceFilesFromDirectory sffd = new SequenceFilesFromDirectory();
                String[] params = new String[]{"-i", input, "-o", output, "-ow"};
                ToolRunner.run(sffd, params);
            }
        } catch (Exception e) {
            e.printStackTrace();
            System.out.println("Serializing the files failed!");
            System.exit(1);
        }
        // Convert the sequence files into vector files
        try {
            Configuration conf = new Configuration();
            conf.addResource(new Path("/usr/local/hadoop/conf/core-site.xml"));
            String input = WORK_DIR + Path.SEPARATOR + "tennis-test-seq";
            String output = WORK_DIR + Path.SEPARATOR + "tennis-test-vectors";
            Path in = new Path(input);
            Path out = new Path(output);
            FileSystem fs = FileSystem.get(conf);
            if (fs.exists(in)) {
                if (fs.exists(out)) {
                    // the boolean argument means "delete recursively"
                    fs.delete(out, true);
                }
                SparseVectorsFromSequenceFiles svfsf = new SparseVectorsFromSequenceFiles();
                String[] params = new String[]{"-i", input, "-o", output, "-lnorm", "-nv", "-wt", "tfidf"};
                ToolRunner.run(svfsf, params);
            }
        } catch (Exception e) {
            e.printStackTrace();
            System.out.println("Converting sequence files to vectors failed!");
            System.exit(2); // was System.out.println(2); an exit code was clearly intended
        }
    }
    public static void makeTrainVector() {
        // Convert the training data into sequence files
        try {
            Configuration conf = new Configuration();
            conf.addResource(new Path("/usr/local/hadoop/conf/core-site.xml"));
            String input = WORK_DIR + Path.SEPARATOR + "input";
            String output = WORK_DIR + Path.SEPARATOR + "tennis-seq";
            Path in = new Path(input);
            Path out = new Path(output);
            FileSystem fs = FileSystem.get(conf);
            if (fs.exists(in)) {
                if (fs.exists(out)) {
                    // the boolean argument means "delete recursively"
                    fs.delete(out, true);
                }
                SequenceFilesFromDirectory sffd = new SequenceFilesFromDirectory();
                String[] params = new String[]{"-i", input, "-o", output, "-ow"};
                ToolRunner.run(sffd, params);
            }
        } catch (Exception e) {
            e.printStackTrace();
            System.out.println("Serializing the files failed!");
            System.exit(1);
        }
        // Convert the sequence files into vector files
        try {
            Configuration conf = new Configuration();
            conf.addResource(new Path("/usr/local/hadoop/conf/core-site.xml"));
            String input = WORK_DIR + Path.SEPARATOR + "tennis-seq";
            String output = WORK_DIR + Path.SEPARATOR + "tennis-vectors";
            Path in = new Path(input);
            Path out = new Path(output);
            FileSystem fs = FileSystem.get(conf);
            if (fs.exists(in)) {
                if (fs.exists(out)) {
                    // the boolean argument means "delete recursively"
                    fs.delete(out, true);
                }
                SparseVectorsFromSequenceFiles svfsf = new SparseVectorsFromSequenceFiles();
                String[] params = new String[]{"-i", input, "-o", output, "-lnorm", "-nv", "-wt", "tfidf"};
                ToolRunner.run(svfsf, params);
            }
        } catch (Exception e) {
            e.printStackTrace();
            System.out.println("Converting sequence files to vectors failed!");
            System.exit(2); // was System.out.println(2); an exit code was clearly intended
        }
    }
    public static void makeModel(boolean completelyNB) {
        try {
            Configuration conf = new Configuration();
            conf.addResource(new Path("/usr/local/hadoop/conf/core-site.xml"));
            String input = WORK_DIR + Path.SEPARATOR + "tennis-vectors" + Path.SEPARATOR + "tfidf-vectors";
            String model = WORK_DIR + Path.SEPARATOR + "model";
            String labelindex = WORK_DIR + Path.SEPARATOR + "labelindex";
            Path in = new Path(input);
            Path out = new Path(model);
            Path label = new Path(labelindex);
            FileSystem fs = FileSystem.get(conf);
            if (fs.exists(in)) {
                if (fs.exists(out)) {
                    // the boolean argument means "delete recursively"
                    fs.delete(out, true);
                }
                if (fs.exists(label)) {
                    fs.delete(label, true);
                }
                TrainNaiveBayesJob tnbj = new TrainNaiveBayesJob();
                String[] params = null;
                if (completelyNB) {
                    // "-c" trains complement naive Bayes
                    params = new String[]{"-i", input, "-el", "-o", model, "-li", labelindex, "-ow", "-c"};
                } else {
                    params = new String[]{"-i", input, "-el", "-o", model, "-li", labelindex, "-ow"};
                }
                ToolRunner.run(tnbj, params);
            }
        } catch (Exception e) {
            e.printStackTrace();
            System.out.println("Training the model failed!");
            System.exit(3);
        }
    }
}
package myTesting.bayes;
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.PathFilter;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.mahout.classifier.naivebayes.BayesUtils;
import org.apache.mahout.classifier.naivebayes.NaiveBayesModel;
import org.apache.mahout.classifier.naivebayes.StandardNaiveBayesClassifier;
import org.apache.mahout.common.Pair;
import org.apache.mahout.common.iterator.sequencefile.PathType;
import org.apache.mahout.common.iterator.sequencefile.SequenceFileDirIterable;
import org.apache.mahout.math.RandomAccessSparseVector;
import org.apache.mahout.math.Vector;
import org.apache.mahout.math.Vector.Element;
import org.apache.mahout.vectorizer.TFIDF;
import com.google.common.collect.ConcurrentHashMultiset;
import com.google.common.collect.Multiset;
public class BayesCheckData {
    private static StandardNaiveBayesClassifier classifier;
    private static Map<String, Integer> dictionary;
    private static Map<Integer, Long> documentFrequency;
    private static Map<Integer, String> labelIndex;
    public void init(Configuration conf) {
        try {
            String modelPath = "/zhoujianfeng/playtennis/model";
            String dictionaryPath = "/zhoujianfeng/playtennis/tennis-vectors/dictionary.file-0";
            String documentFrequencyPath = "/zhoujianfeng/playtennis/tennis-vectors/df-count";
            String labelIndexPath = "/zhoujianfeng/playtennis/labelindex";
            dictionary = readDictionnary(conf, new Path(dictionaryPath));
            documentFrequency = readDocumentFrequency(conf, new Path(documentFrequencyPath));
            labelIndex = BayesUtils.readLabelIndex(conf, new Path(labelIndexPath));
            NaiveBayesModel model = NaiveBayesModel.materialize(new Path(modelPath), conf);
            classifier = new StandardNaiveBayesClassifier(model);
        } catch (IOException e) {
            e.printStackTrace();
            System.out.println("Initialization failed while building the vector for the test data...");
            System.exit(4);
        }
    }
    /**
     * Load the dictionary file. Key: term value; Value: term ID.
     * @param conf
     * @param dictionnaryDir
     * @return
     */
    private static Map<String, Integer> readDictionnary(Configuration conf, Path dictionnaryDir) {
        Map<String, Integer> dictionnary = new HashMap<String, Integer>();
        PathFilter filter = new PathFilter() {
            @Override
            public boolean accept(Path path) {
                String name = path.getName();
                return name.startsWith("dictionary.file");
            }
        };
        for (Pair<Text, IntWritable> pair : new SequenceFileDirIterable<Text, IntWritable>(dictionnaryDir, PathType.LIST, filter, conf)) {
            dictionnary.put(pair.getFirst().toString(), pair.getSecond().get());
        }
        return dictionnary;
    }
    /**
     * Load the term document-frequency files under df-count. Key: term ID; Value: document frequency.
     * @param conf
     * @param documentFrequencyDir
     * @return
     */
    private static Map<Integer, Long> readDocumentFrequency(Configuration conf, Path documentFrequencyDir) {
        Map<Integer, Long> documentFrequency = new HashMap<Integer, Long>();
        PathFilter filter = new PathFilter() {
            @Override
            public boolean accept(Path path) {
                return path.getName().startsWith("part-r");
            }
        };
        for (Pair<IntWritable, LongWritable> pair : new SequenceFileDirIterable<IntWritable, LongWritable>(documentFrequencyDir, PathType.LIST, filter, conf)) {
            documentFrequency.put(pair.getFirst().get(), pair.getSecond().get());
        }
        return documentFrequency;
    }
    public static String getCheckResult() {
        Configuration conf = new Configuration();
        conf.addResource(new Path("/usr/local/hadoop/conf/core-site.xml"));
        String classify = "NaN";
        BayesCheckData cdv = new BayesCheckData();
        cdv.init(conf);
        System.out.println("init done...............");
        Vector vector = new RandomAccessSparseVector(10000);
        TFIDF tfidf = new TFIDF();
        // the test record: sunny,hot,high,weak
        Multiset<String> words = ConcurrentHashMultiset.create();
        words.add("sunny", 1);
        words.add("hot", 1);
        words.add("high", 1);
        words.add("weak", 1);
        int documentCount = documentFrequency.get(-1).intValue(); // key -1 holds the total document count
        for (Multiset.Entry<String> entry : words.entrySet()) {
            String word = entry.getElement();
            int count = entry.getCount();
            // look up the word ID produced by the dictionary.file-0 (tf-vector) output
            Integer wordId = dictionary.get(word);
            if (wordId == null) { // was StringUtils.isEmpty(wordId.toString()), which would NPE on unknown words
                continue;
            }
            if (documentFrequency.get(wordId) == null) {
                continue;
            }
            Long freq = documentFrequency.get(wordId);
            double tfIdfValue = tfidf.calculate(count, freq.intValue(), 1, documentCount);
            vector.setQuick(wordId, tfIdfValue);
        }
        // Run the Bayes classifier and pick the label with the best score
        Vector resultVector = classifier.classifyFull(vector);
        double bestScore = -Double.MAX_VALUE;
        int bestCategoryId = -1;
        for (Element element : resultVector.all()) {
            int categoryId = element.index();
            double score = element.get();
            System.out.println("categoryId:" + categoryId + " score:" + score);
            if (score > bestScore) {
                bestScore = score;
                bestCategoryId = categoryId;
            }
        }
        classify = labelIndex.get(bestCategoryId) + "(categoryId=" + bestCategoryId + ")";
        return classify;
    }
    public static void printResult() {
        System.out.println("The detected category is: " + getCheckResult());
    }
}
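For completeness, here is a minimal standalone entry point of my own (hypothetical, not part of the original post) that runs the check by itself, assuming PlayTennis1.makeTrainVector() and makeModel() have already been run so the model, dictionary, and label index exist on HDFS:
package myTesting.bayes;
// Hypothetical standalone runner (added for illustration).
public class CheckDataRunner {
    public static void main(String[] args) {
        // Expected to print the winning label, e.g. "No(categoryId=...)"
        BayesCheckData.printResult();
    }
}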